End of two years
This week marks the end of a two-year experiment to do something different. That was the only criterion. To do something different I first had to look at what I was currently doing. That was easy. I spent all my days working on a computer, meetings were on a screen, and my role was morphing from programmer to sitting at a screen talking with other people about other people's computers.
So I stopped working with computers, stopped programming and stopped working. I got busy though, very busy.
So what’s next? Spoiler alert, I’m going back to programming - although I reckon there’s only three years left in that industry.
I left employment just as mainstream generative AI was starting to poke its head above the business parapet, and now I’m returning two years later glad to have missed the inevitable calls to give input on a corporate AI strategy.
My first actual encounters with generative AI were in the late summer of 2022, through art and the creative industries. I was at a group exhibition in my hometown, and the buzz among the artists upstairs was that one of the other artists had made their pieces in the exhibition by ‘typing their poetry into an AI over and over again and that had produced the pictures’. When I entered the room that contained the generated art, a gaggle of people surrounded the artist in question.
I’d been asked what I thought of AI by other creatives who were starting to dabble, but here was an example of AI being used to produce work, and it was selling. I bought one. The art was good irrespective of how it was created, and the artist has gone on to produce even better work in the intervening years. It felt like an inflection point that could not be ignored. Here I was in my hometown, population 38,000, aged 52 and probably in the younger age range of the attendees, watching generative AI become normal. Now, two and a half years later, it is commonplace, and as I re-enter the workplace a very useful consumer-level tool is breaking out of its AI-to-human transactional ping-pong bubble. Integrations to the wider digital world are launching, and AI is breaking out of that lump of glass, silicon and aluminium we cradle in our hands.
This now raises the questions,
“What are application interfaces for?”,
“Why would AI use human coding languages when it can just create its own?”
We have arrived at that moment that aligns with Star Trek, where the crew make voice requests of an unseen computer and receive results in return. A route to the Enceladus quadrant is plotted, an alien compound is analysed, the status of the ship’s structural integrity monitored, and the whereabouts of Captain Picard and his crew tracked at all times. Hell, the USS Enterprise’s computer could even make a pot of Earl Grey tea for you. But the Teasmade has been around since the 1930s, so no huge advancements there.
So I’m back and generative AI is here too. I’m not worried.
PS, from Wikipedia: “On 19 September 1891, Charles Maynard Walker of Dulwich published details of an ‘Early Riser’s Friend’ in Work magazine. The article was detailed and included illustrations, but the teamaker was never patented.”