ChatGPT has been the talk of the town for some time now, given that it has supposedly passed a law exam, forced a rethink of how online education assesses submitted work, and won capital injections from Microsoft. Artificial intelligence (AI) has been with us since the days of the animated paperclip (if you can remember it) in operating systems past. Chess engines have become so powerful that beyond international chess, the computer has also beaten the human champion at Go. What does the future hold for us?
I attended a mobile photography class recently and learnt that a smartphone has AI that processes images before we use any app to make edits. That processing step is still done manually by humans who shoot with a DSLR: they develop the raw files themselves before editing in Photoshop or a similar tool. AI will do a lot more of this grunt work in time to come.
ChatGPT will not only have to compete with Bard, but with other AIs in development (supposedly March 2023 should see one launched in the East). People will naturally compare the outputs of various AIs when seeking answers. Does that make human research redundant? The short answer is no. While people prefer a consolidated answer over a bunch of links to follow up on, for whatever the AI does not know (or has not yet been fed into its database), human research is needed to push the boundaries further.
So what does the future hold?
Where statistics are not stable, and especially when encountering a scenario not documented before, humans have the interpretative situational assessment to do. I played international chess competitively in my school years, left the game, and got back into it now that chess engines are far more prevalent. How players today work with those chess engines is how we will work with AI in future. In chess, the stats and odds for many moves and situations are already known and stable. However, humans will keep pushing the boundaries of less documented scenarios for the machine to learn from. I also remember a Chess Grandmaster saying that an opening move with 60% odds is still a playable move. Do we give up on experimenting with situations of known lesser odds? Imagine you have the best recipe for roast chicken and hand it to 10 top chefs; surely there will be 10 different outcomes. Everyone can have the same, similar or even identical knowledge, but the execution will be a different story, won't it?
Change is upon us.
Having been long enough in my current industry, I have heard narratives of stretching out the home loan as long as possible to free up monies to invest. While that suited the past, it is less so today. What changed? Interest rates. This proves that narratives change with changing circumstances. AI is going to accelerate change: already I am reading of layoffs at big tech firms, when not so long ago tech jobs paid so well. I guess the pendulum of favor is on the move again. So is the pendulum of scrutiny.
What can we do for ourselves now? Of late I have been experimenting with ChatGPT prompts. Just as many have learnt to use Microsoft Excel, learning how to direct the AI with prompts is a start to working with AI. Knowledge asymmetry will continue to narrow, so focus on execution. Creative persons I know use Midjourney and DALL-E to generate initial images, then touch up (execution) to enhance the final product. The future is amazing. Yes, some jobs will disappear, but new jobs will be created.
Remember also that grunt work and tasks can be delegated away, but accountability and responsibility cannot (a bad precedent to set in a court of law as well). So the day to worry is when humans ditch responsibility and accountability to the machine.
Better to learn to work with AI now than later.
The featured image is of a chameleon, done by DALL-E when I wanted one to illustrate the ability to switch between going long or shorting the market. Yes, my model portfolio allows shorting the market.