An engineer’s perspective on AI
Recently the Partnership on AI released a draft of guidelines for participatory and inclusive AI and is asking for public feedback on the proposal. My goal is to offer some constructive responses in as many blog posts as I can during the month of October.
First, I wholeheartedly agree with this initiative and, in all transparency, would love to have a seat at the table for an IRL discussion. I really love talking about technology, the process around it, and the economic forces that are going to drive the future of what is currently being coined “artificial intelligence”. I have been in this industry since it was only a CLI on a cathode-ray screen, and I can speak with a high degree of expertise and personal experience about the barriers I have seen over the last 30 years, especially for underserved communities.
My initial plan was to send my feedback there, as I have worked in the public sector, but I took a moment to reflect on how I have seen this process play out in the past. My prior experience has been that public feedback has the potential to go absolutely nowhere and is prone to confirmation bias from the metaphorical “body politic”. So, I am opting for a more transparent process: speaking about my experiences with AI and where I feel the industry is going to have serious issues “down the stack”.
Ok, I just feel it is very important to keep stating this: “AI does not exist”. Especially not the way the media portrays it. I had to dig up an article that I was looking at two years ago, and I think it is this one, as my browser history is very long. Not to dig on ChatGPT or OpenAI - they are amazing - but the main impression that has stuck with me since hearing about ChatGPT was that it did not look complete or testable to me. And, to use SAFe and Agile parlance: that breaks delivery.
I would have sworn I had seen demos of GPT going back to the early 2010s(?) in an online seminar somewhere I cannot remember, and I distinctly remember training on supervised learning in ML before ChatGPT, back when I was really heavy into data architecture and warehousing. So, that did not necessarily feel new. To me, the thing that was different was the “generative” process, which truly blew my mind. Like: “Wow, this is amazing, I want to know more about this.” Down to its bones, because it did not “feel” like the bots that had been around in the 2010s.
And that is the dopamine hit of “AI”. On the surface, it looks to be game-changing - and honestly it is. I use the technology right now, but I wish to be an “ambassador” of AI, not a “crusader”.
As of today, generative AI is not “AI” in the sense that it replicates the human mind. There is a great definition in the article linked above: what we have is a flavor called “Artificial Narrow Intelligence” that is becoming more adept at pattern matching on a general scale. And in my opinion it is a tool to be used, not a replacement for a person’s own intelligence.
I was recently at an AI conference in Atlanta, watching some product demos in a room of perhaps 100+ people, where folks with master’s degrees and PhDs were asking the same question: “How do we scale this?” Or questions about implementing products at scale. I was actually very struck by this, because the core question of “Does this even work?” seemed to me the more pressing and effective conversation.
Also, a room of folks who I am guessing know wayyyy more about math than I do sort of missed practical coding and application basics, like batch processing and queuing, or how a team of people actually uses “AI”.
However, two wonderful gentlemen sitting next to me, who had just graduated from UGA with their master’s degrees in mathematics, heard me ask: “Does X Product understand data types, such as integers, dates, or other data types in a relational database?” The answer I received from the presenter at the lectern was honest: “It looks at strings and derives their intent from that.” And I thought to myself, as a computer scientist: what does that do to the algorithm? JavaScript’s eccentricities are a great example of how a loosely typed language can be fussy, and that gave me pause as a programmer, not to mention as a data architect and scientist.
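To make that pause concrete, here is a minimal sketch - my own hypothetical rows, nothing from the demoed product - of what treating typed database columns as bare strings does to very ordinary comparisons:

```typescript
// Hypothetical rows where everything has been flattened to strings.
const rowsAsStrings = [
  { id: "10", shippedOn: "2024-1-5" },
  { id: "9",  shippedOn: "2024-01-05" },
];

// Lexicographic comparison: "10" sorts before "9" because "1" < "9".
console.log(rowsAsStrings[0].id < rowsAsStrings[1].id); // true, which is numerically wrong

// Two spellings of the same date are simply different strings.
console.log(rowsAsStrings[0].shippedOn === rowsAsStrings[1].shippedOn); // false

// With real types, the intent does not have to be guessed.
const rowsTyped = [
  { id: 10, shippedOn: new Date("2024-01-05") },
  { id: 9,  shippedOn: new Date("2024-01-05") },
];
console.log(rowsTyped[0].id < rowsTyped[1].id); // false, as numbers should behave
console.log(rowsTyped[0].shippedOn.getTime() === rowsTyped[1].shippedOn.getTime()); // true
```

A relational database answers both of those questions unambiguously because the types carry the intent; a string-only view has to guess it back.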
I wrote down on the notepad on my Mac: “Does that even work?”, referring to the lack of typed data, and the recent graduate sitting next to me leaned over and said: “They’re trying to figure that out.” They were very gracious about my undergrad-level questions on the challenges with cosine similarity, as I was reading up on the core premises and trying to up my mathematics game without solving complex math problems. The last time I did anything close to this was in college twenty years ago, and my upper mathematics courses were not graduate courses. (I love maths, but did I mention I am not a math major?!) I just want to use a thing and have it give out idempotent answers - that is how we pass QA, and how I can calculate ROI and TCO without a degree in mathematics.
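For anyone else trying to up their math game: the cosine similarity I was asking about is just the textbook formula - nothing specific to any vendor’s product - and it measures the angle between two vectors, not their size, which is exactly where some of the challenges come from. A minimal sketch:

```typescript
// Cosine similarity: cos(theta) = (a . b) / (|a| * |b|)
// Textbook definition only; which embeddings actually feed a and b is each product's own business.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must be the same length");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Direction matters, magnitude does not:
console.log(cosineSimilarity([1, 2, 3], [2, 4, 6])); // 1 - same direction, "identical" despite different lengths
console.log(cosineSimilarity([1, 0], [0, 1]));       // 0 - orthogonal, no similarity at all
```

Two inputs embedded in the same direction score a perfect 1 even if one is much “bigger” than the other, which is either useful or misleading depending on what you are comparing.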
Also - I am not trying to call anyone out - the room was electric. My concerns came from my impression that, at an academic level, the room had just accepted that this very opaque technology was viable and had a sort of blind faith that it was just “going to work” because somewhere there was an accepted “theorem” which, to me as an engineer, was not necessarily, in the truest sense of the word, true. It reminds me a lot of string theory and the criticisms that surround it. (That is another discussion for another blog post.)
And I kept thinking to myself: “Look before you leap.“ I still have the scars from the dotcom boom and bust - and seeing industries that are compelled to throw more and more revenue into innovation and recoup the costs at the same time to remain profitable. My interpretation of this is that the engineers and execs are all under an extreme amount of pressure to productize, and because they are innovators they will do that. It is super exciting but also super intense. Honestly though, I like that intensity. I like being on the bleeding edge.
And, while I will 1000% look to “Generative Narrow Intelligence Human Interaction Tools” (I wish there were a more marketable acronym for that), my opinion is tempered by budget, efficacy, and the fact that innovation comes with quite a few false starts, redefinitions, successes, and successful failures.
I cannot tell you how many times I have seen technology try to be too clever and miss the essentials of human ethos and pathos. I’ve done it myself.
I feel like I am going on here and need to break some things up into more posts. Oh and by the way, this is why a feedback form doesn’t work for me: there is a lot to take in, especially from the computer/data science modalities.
To build on my feedback, I feel we have to start from the basics and move forward, because developers and coders coming from bootcamps, or folks beginning their programming careers outside the walls of higher education, are going to use these tools. They may or may not be scientists; they may be hobbyists, or they may learn at a community college or community center. Folks may just have a project thrown at them - “Go learn AI” - without any of the support or leadership I have had, and to me that is the fundamental problem. This is not like learning “depth-first search”; it is much, much more. The industry has to learn how to teach others how to use these tools.
In the next post I will be more prepared to give better feedback on specifics in the proposal, as I am still trying to collect myself and my notes. My responses may not be the most scientifically or academically rigorous feedback, but they will be human. And to me, that is more important than a data point. Because, like AI, data points and UIs are prone to bias, as is the way the form data is presented. Heck, I have bias too.
To conclude, let’s look at the prompt “what is AI, really?” as input into Google Search’s new AI feature and what the generator responds with. My questions to folks are: do you agree with the result? How accurate is the answer? Who is the audience for this response? Judge for yourself, because your brain is more powerful than AI is right now.
From Google Search: “what is AI, really?”
Machine learning: This includes neural networks and deep learning.
Natural language processing: This includes speech and text recognition, analysis, and generation.
Computer vision: This allows computers to see, identify, and process images in a similar way to humans.
Cognitive intelligence: This combines technologies to create AI services that can understand things at a human level.
Virtual agents: These computer mechanisms interact with humans.
Speech recognition systems: These systems can understand the human voice.
Robotics process automation (RPA): This uses preprogrammed software tools to automate labor-intensive tasks.
Data storage: This can be structured or unstructured, and may require a lot of storage. Cloud technology can play a major role in this area.
PS - I :heart: my peeps at Google; I have the greatest respect for them, and my time at their offices in The ATL was awesome. I have worked directly with Google employees in my previous position and they were wonderful. I have spoken to reps at OpenAI - and they were also great. This is not a hit piece, nor should it be construed in any way as undermining the work of good people. And there are great, compassionate engineers out there!
My goal is to speak about my experiences with how technology can discriminate against underserved populations, and that is not limited to model training. From what I have heard and seen at Google, their goal is to be equitable. My experience was that their engineers and colleagues were very transparent and did not try to hide anything. I really appreciated their candor, so I can reflect on it here in my feedback. Your experience may be different, and that is important as well.
Hopefully you find this helpful and a beginning to constructive critical thinking.
PPS - Image prompt: “An abstract picture of what AI looks like to AI. Do Androids Dream of Electric Sheep?“
Be good to yourself, and don’t code tired!