
Demonstrate AI skills to a potential employer


No one can afford to ignore AI—
especially job seekers.

Here’s how you can demonstrate AI skills to a potential employer





We’ve been here many times before: a new technology paradigm shift arrives and creates an urgent need to learn the skills necessary not only to be knowledgeable but to be proficient. It also divides people into two categories: those who embrace the new technology, and those who get left behind. Past technology innovations like the web and mobile unfolded over many years, and most people were able to adapt to change, modernize their skills, and adjust to new patterns and even new types of work. AI, however, poses a more formidable challenge for workers. Customer and employer demands for AI are here now in 2024, not five years from now, and people are struggling to keep up.

Bottom line: no one can afford, theoretically or financially, to ignore AI, especially job seekers.

All job candidates—and I do mean all—are going to have to be AI literate. And it won’t be enough to just say you have the skills. The phrase ‘fake it until you make it’ will not work with this technology. As someone who works with AI every day, here are my thoughts on how candidates will likely need to demonstrate their skills in order to stand out to potential employers.

GENERALIST VS. SPECIALIST: WHEN IT COMES TO AI SKILLS, YOU WANT TO BE A BIT OF BOTH


It’s been 14 months since the launch of ChatGPT, and we are still very much in the hype phase. In fact, a recent BCG study that polled 1,400 executives worldwide revealed that 90% of them said they were “either waiting for GenAI to move beyond the hype or experimenting in small ways” within their companies. This leads many to wonder: If employers and executives are still trying to figure out how to use AI, is it better to apply as an AI generalist or as a specialist?

As a refresher, if you are a generalist, you have a wide range of skills and knowledge versus a specialist who is considered a subject matter expert. In the case of AI, be a bit of both. If you jumped on the ChatGPT bandwagon early, you might now be considered a specialist. However, if you have also extensively used Google’s Bard or Meta’s Llama 2, you could classify yourself as a generalist who has a wide range of chatbot skills and experience. Both are beneficial.

Additionally, it will be essential that candidates understand what large language models—AI systems designed to understand and generate text-based language—can and can’t do, while also being able to demonstrate how you, your team, or your company incorporated AI into your work. Specialists like engineers will need to go one step further and demonstrate that they have been building AI services and tools.

People want to move fast in AI, and candidates need to be able to show a track record of applying the technology to a project. Reading papers, blogging about AI, and being able to talk about what’s in the news show curiosity and passion, but they won’t stack up against another candidate’s ability to execute. Ultimately, be ready to define and defend how you’ve used AI.

HALLUCINATIONS AREN’T IN YOUR HEAD. THEY ARE A REAL PROBLEM IN AI AND NEED SOLUTIONS

Not only do you need to be able to define and defend how you use AI, but you also need to explain how you avoided the risks of AI hallucinations, which continue to be a common problem for a wide range of AI users—especially engineers. Hallucinations occur when an algorithm confidently provides a wrong answer based on its predictions of the next word, number, or element of code in a sequence. This creates extra work, because engineers and developers have to debug these issues, and more importantly, they must have the knowledge required to do so effectively.

This isn’t likely to change, especially for engineers. While knowledge assistant tools are emerging to help business users avoid hallucinations, according to a recent Aporia survey, 89% of machine learning (ML) engineers who work with generative AI say their models show signs of hallucination. Developers should bring examples of how they identified these issues and mistakes and found viable, long-term solutions for them, as this will undoubtedly remain a challenge while AI providers try to be more careful about the data and information used to train AI.

What’s more, candidates should also—where possible and relevant—explain how they prevented legal issues or claims around copyrighted material used when training AI. While candidates don’t need to have extensive legal knowledge or experience, it’s essential that they have enough knowledge and experience to work closely with legal teams during these situations.
The OpenAI/New York Times lawsuit is just one example and it won’t be the last.

Overall, AI skills will continue to be critical for all jobs, not just those in the tech industry. One thing to note: employers should not expect candidates to do all the heavy lifting; it’s vital that employers have the proper processes in place to upskill current and future employees. The ideal candidate in 2024 will have demonstrated AI skills along with the ethical and practical knowledge of what the technology can and can’t do. It’s not just about updating your résumé; it’s about updating your skill set to make yourself invaluable to any employer.



MICHAEL BECKLEY
Founder | Chief Technology Officer | Board Member
Michael Beckley is a co-founder of Appian (Nasdaq: APPN), a process automation and low-code technology pioneer, as well as an author and investor.

Appian.com

Co-author of the book: HyperAutomation

Advisory Boards:

Center for New American Security (CNAS)


University of Virginia Computer Science Corporate Advisory Board

Northern Virginia Technology Council


Past Board Positions:

ContactEngine (acquired by Nice, Ltd.)

Appian Corporation

Media (print): Automatisering Gids