AI and the future of work: will Artificial Intelligence affect the workforce?



Experts thought that Artificial Intelligence (AI) would first automate menial tasks: driving, grocery delivery, and the like. For eight straight years, Elon Musk has promised full self-driving capabilities in his Tesla cars. There has been significant progress towards this goal, mostly by companies other than Tesla. But sipping chai while an autopilot drives us from Marina Beach to Mylapore remains a distant prospect.

We may have made terrific — or terrifying, depending on how you see it — progress on another front. Today, AI is writing competent code, producing protein structures as well as product strategies, jotting down screenplays, reimagining our sorry bodies as ripped superhero avatars, generating film, playing music. All this in a fraction of the time it would take a semi-competent professional with current tools. News updates might have you thinking that your plumber Perumal’s job outlook shines brighter than your daughter Paromita’s, who is plotting a career in software engineering. So, is an AI about to make your job obsolete?

What is AI anyway?

It helps to have an accurate mental model of what AI systems actually are. Think of them as a sort of mapping: from inputs to outputs. The spam detector running underneath Gmail is an AI that maps from the content of emails to ‘spam’ or ‘ham’ categories. Dall-E, the AI art generator, is a mapping from English text to its visual representation.
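The mapping idea can be made concrete with a toy sketch. This is purely illustrative and not how Gmail's filter works — real spam detectors use learned statistical models — but the shape of the mapping, email text in, a label out, is the same. The keyword list here is a made-up stand-in for a learned model.

```python
# Illustrative only: a hand-written stand-in for a learned spam model.
# The point is the shape of the mapping: email text -> 'spam' or 'ham'.
SPAM_WORDS = {"lottery", "winner", "free", "prize"}  # hypothetical keyword list

def classify(email_text: str) -> str:
    """Map the content of an email to a 'spam' or 'ham' label."""
    words = set(email_text.lower().split())
    return "spam" if words & SPAM_WORDS else "ham"

print(classify("You are the lucky lottery winner!"))  # spam
print(classify("Lunch at Mylapore tomorrow?"))        # ham
```

Swap the hand-written rule for a trained neural network and you have the modern version; the interface — inputs to outputs — does not change.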

AI systems inside Waymo cars (formerly the Google self-driving car project) map from current location, speed, relative position of surrounding vehicles, desired destination and related cues to the next desirable driving action. Their driverless taxi fleet recently launched in the U.S.

Will an AI actually take my job?

AI historians will record this decade as the coming of age of generative AI technologies. This loosely-defined term differentiates AI models that generate images, music, and text from those that make decisions or do forecasting.

Text and image generative AI models released this year are more than decent at what they do. The proof is in the GoFundMe campaign to protect artists from AI technologies, which raised $150,000 just this week. Or the World Economic Forum report that estimated that by 2025 machines will displace 85 million jobs even as they create 97 million new job roles.

The lesson from history is that we should take such figures with more than a pinch of salt. In light of the AI advances in 2022, the Davos set would do well to revise the top 20 jobs in their ‘decreasing demand’ category. They have accountants, payroll maintainers, and bookkeeping clerks ranked as the top three. If anyone is losing their jobs, it is the ones who wrote the Future of Work 2020 report: accountants will do just fine (they survived Excel’s best efforts).

Rarely does technology eliminate entire jobs. In practice, technologies only automate tasks and reduce the labour involved in a job. A farmer and his oxen used to plough an acre of land per day. With a tractor, the same farmer can now plough 15 acres in the same time window. Generative AI tools will introduce a similar rate of productivity increase in knowledge work. They will also render certain tools and skills obsolete in the process. Such productivity improvements and the resulting profits often bring greater employment. This was the case in the United States: between 2008 and 2018, across 11 jobs considered at risk from AI, employment rates, on average, went up by 13%.

New technologies also create entirely new job categories. There were no pilots before the aeroplane. It is hard to imagine what new categories of employment generative AI will end up creating. Suggestions abound that prompt engineering — figuring out the best way to communicate your intent to an AI — could be one. I am not sold. First, as the AI researcher Andrej Karpathy has observed, such a role is more akin to a psychologist. Second, all we need is another AI that maps from plain English to the more descriptive prompt queries that generative AI models prefer. Ergo, don’t waste your time mastering skills for an imaginary job.

The cost of creativity

How much does it cost to train a generative AI model like GPT-3? Around $4.5 million, by some estimates. We cannot know for certain because OpenAI, despite its name, is quite secretive about such details. Tom Goldstein, a computer science professor at the University of Maryland, showed that it costs the company roughly $100,000 per day, or $3 million per month, to keep ChatGPT running at current demand.

At a more granular level, it costs $0.0003 to generate a word out of the language model. If you get ChatGPT to write your MBA dissertation, at 12,000 words, it is going to cost less than $4 in computational costs. We also know that it cost around $600,000 to train Stable Diffusion 1, an open-source text-to-image model. If you want to use Stable Diffusion for image generation with standard settings, i.e., 512 x 512 resolution, it costs $1 to generate 500 images. Draft generation by interns is dead.
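The figures above are easy to sanity-check. A quick back-of-the-envelope calculation using the per-word and per-image rates quoted in the text:

```python
# Back-of-the-envelope check of the costs quoted above.
COST_PER_WORD = 0.0003       # dollars per generated word, per the estimate
dissertation_words = 12_000  # a typical MBA dissertation length

dissertation_cost = COST_PER_WORD * dissertation_words
print(f"Dissertation: ${dissertation_cost:.2f}")  # $3.60 -- under $4

# Stable Diffusion at default 512 x 512 settings: $1 buys 500 images.
cost_per_image = 1 / 500
print(f"Per image: ${cost_per_image:.4f}")  # $0.0020
```

At two-tenths of a cent per image and a few dollars per book-length text, the marginal cost of a draft rounds to zero.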

Some have proposed a three-step process to understand how humans and AI will collaborate on cognitive tasks in the future: the requirement comes from the humans; they consult an AI system for a menu of options; they then take the most aligned option and refine it further for their purposes. This is a helpful framework. But notice that AI is not really altering the structure of cognitive value production in institutions.

In the future, there may be little incentive to hire and mentor junior staff. However, I suspect the kids will be alright. What is about to be shattered is the current paradigm of how one acquires expertise. We need to restructure our education systems as well as our traditional career trajectories as a matter of urgency. ChatGPT produces B+ grade MBA essays on niche topics like relational contracts. High school teachers and college professors are in for a rude awakening.

How can we thrive in an AI-first world?

The answer is to become more human. Qualities such as agency, commitment, empathy, and perseverance will become more valuable in the workplace. The downside: we usually develop these traits in our first jobs. So where we might acquire them, once starting out as junior staff becomes nearly impossible, is a question for which I have no answer at the moment.

As a general recipe though, caring more deeply about your clients and colleagues, thinking more critically about your job and its place in the world, and mastering the finer details of its knowledge base should serve anyone well. Writing boilerplate code won’t get you very far as a software engineer. Instead, start understanding latency, algorithmic complexity, and clean abstractions. Across the board, there is going to be less incentive to tolerate the clever but combustible types in workplaces. So maybe a bit of humility and camaraderie is in order for those of us working white collar jobs.

The generative AI party is only getting started, and it will be mint.

Write like Shakespeare, paint like Picasso

So how good are these generative AI systems? Scarily good in certain contexts. This picture of Marilyn Monroe and John F. Kennedy was never taken: it was generated by a text-to-image AI model. It takes a bit of effort to notice the flaws: Monroe’s shoes and how her feet fit them; the unnatural dip in her right shoulder; the asymmetry of her collarbones. Even as you parse the picture for its flaws, remember that we are only a year into this technology. Bear in mind also that a designer would have to spend, at the very least, several hours to Photoshop something similar. Upon receiving the text prompt, the AI model conjured this Polaroid up in under five seconds.

With text generation, the story gets even wilder. In June this year, a Google engineer who spent an unhealthy amount of time chatting with LaMDA contended that the chatbot was sentient. The poor chap was later let go. More recently, Michelle Huang, a New Yorker working at the intersection of art and tech, fed GPT-3, an OpenAI language model, her own journal entries from when she was a child. She claims to have ‘healed’ from being haunted by the metaphorical question of “would your eight-year-old self be proud?” upon hearing her inner child say “I’m proud of you” via the chatbox. I asked ChatGPT, an improved GPT-3 tuned for conversations with humans, how AI will alter the future of work. Here is the response, with some sentences edited out for brevity: “One potential use is to assist with tasks that are repetitive or time-consuming, allowing humans to focus on more creative and strategic work… language models like myself and ChatGPT have the potential to improve efficiency and productivity [but] should be used in conjunction with human workers rather than replacing them. The use of these technologies should also be carefully considered and implemented in a way that is ethical and respects the rights and needs of all employees.”

Bar the display of Anniyan-like multiple-personality tendencies – ‘myself and ChatGPT’ when it is ChatGPT – the answer was reasonable, the sort you would expect a first-year computer science undergrad to churn out. But this is not ChatGPT’s only limitation. ChatGPT is good at prose, not so great at reasoning, bad at mathematics, and downright terrible at truth-telling. Just the combination of traits that makes one a great fit for corporate consulting.

The Colombo-based writer is not a prophet. He is an AI engineer. 


