
Does AI threaten the value of human labor?

🍵 Brew · Digital Ethics, Generative AI · 5 min read

🌱 Vertico Labs Partner

Alumni B26 @ 500 Startups
Former VP @ GovExec & Technical Co-founder @ The Atlas for Cities

Published 3 weeks ago
Updated 3 weeks ago
Ideal for Knowledge Guardians
With abundant sympathy, Knowledge Guardians internalize their own experiences when learning and seek to create a path to knowledge acquisition for others that's easier, more enjoyable, and more encouraging.

P.S. Lab Notes are written for and organized by persona types 👤 – we wanted to sort our content by the way people think rather than by topic, because most topics benefit people of all backgrounds in product building. Our method lets readers hone in on what suits them, e.g. a perspective, lessons learned, or tangible actions to take.

Knowledge Guardians

Exploring the value of human labor has become especially poignant amidst the tech industry’s fluctuations, rapid AI developments, and layoffs. The question often emerges from a place of fear rather than as an argument for sustaining human capital.

Yet focusing on enduring qualities (like the lasting impact of household appliances 🤞, life-long friendships, and timeless teachings) can offer a more constructive perspective on our professional worth, steering us away from fear-driven considerations.

This lab note reflects a developing thought process, or a 🍵 Brew, and will likely be updated frequently over the next few months. We could really use your input to deepen our thinking and perspective, so drop us a note in the feedback form at the bottom of the page to let us know your thoughts!

Tackling the fear of AI taking our jobs: it’s been around a long time. Example: 1965…

Before ChatGPT became a household name, I delved into the book “Superforecasting: The Art and Science of Prediction” to hone my foresight skills, which are crucial for crafting effective roadmaps and product strategies. Surprisingly, I stumbled upon a section discussing AI that I had previously overlooked.

…in 1965 the polymath Herbert Simon thought we were only twenty years away from a world in which machines could do “any work a man can do”, which is the sort of naively optimistic thing people said back then, and one reason why [David] Ferrucci–who has worked in artificial intelligence for thirty years–is more cautious today.

Considering the value of human labor in an AI-driven future, we offer a hopeful outlook: that value is rooted in our ability to predict and make decisions, especially those that affect humanity.
Photo by Diego PH on Unsplash

Sustainable Value: Prediction, Judgement, & Originality

Shedding the fear of AI allows a clearer exploration of our intrinsic value. Consider the skills of prediction and judgement, both of which we use to survive in everyday life. We hone these skills in professional settings, where they feed into strategy, creativity, and originality.

Despite AI’s advancements in simulating human understanding, as David Ferrucci notes, originating meaningful insights remains a distinctly human domain.

Machines may get better at “mimicking human meaning,” and thereby better at predicting human behavior, but “there’s a difference between mimicking and reflecting meaning and originating meaning,” Ferrucci said. That’s a space human judgement will always occupy.

Drawing from forecasting, our ability to predict gives us a pulse on our own culture.

In forecasting, one strategy for improving accuracy is to continually incorporate new information into your predictions. We see this concept in professional settings, such as the use of agile project management strategies. If we abstract it to keeping a pulse on news, industry, culture, and global events, we can see that we inform ourselves in order to make better decisions.
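To make the updating idea concrete, here’s a minimal sketch using Bayes’ rule, the textbook way to fold new evidence into a probability estimate. The forecast question, prior, and likelihood numbers are entirely hypothetical, chosen only for illustration:

```python
# A minimal sketch of updating a forecast with new information via Bayes'
# rule. All numbers are hypothetical and chosen purely for illustration.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# Forecast: "Will the feature ship this quarter?" Start at 60%.
forecast = 0.60

# New information: a key dependency slipped. Suppose a slip is half as likely
# in worlds where the feature still ships (0.3) as in worlds where it misses (0.6).
forecast = bayes_update(forecast, p_evidence_if_true=0.3, p_evidence_if_false=0.6)

print(f"Updated forecast: {forecast:.0%}")  # ~43%: revised down, not abandoned
```

The point isn’t the math so much as the habit: each new piece of information nudges the forecast rather than resetting it.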

AI models struggle to understand the conceptual significance of emerging ideas

Our collective experience is constantly evolving, which creates room for emerging ideas to develop, especially ideas no one has written about yet. AI will have neither the context of their significance nor the data to be trained on them. We already see this where AI cannot keep up with trending slang attributed to newer generations. Yet even if it could, it cannot really understand why we care about such slang. The point becomes even more obvious when you consider the impossibility of AI predicting what will become slang in the future.

From an article on slang by Caleb Madison:

Organically encrypted through shared experience, slang is difficult for anyone outside the given speaking community to reproduce.

The beauty of predictive and judgement-based skills, rooted in our lived experience, is that their output often speaks to us and resonates with us. This is why human value is so deeply intertwined with art, music, writing, perspectives, and more.

Quoting Madison again in the same article:

Artificial intelligence, in contrast, is disconnected from the kind of social context that makes slang legible. And the sterile nature of code is exactly what slang—a language that lives in the thin threshold between integers—was designed to elude.

Using AI tools to make culturally important content requires human involvement

AI should not be expected to output predictive or judgement-based responses, and definitely not without human involvement. AI already hallucinates, confidently providing wrong answers to simple factual prompts. Leveraging AI for predictive use-cases requires human involvement, just as it does for factual or data-gathering quests.

There is an ethical question behind generating works where AI is in the driver’s seat of invention, because it is likely stealing from real creators. We touch on this below, but the important thing is to stay involved, so that we can use our skills to make decisions about how AI is used.

The necessity for human oversight with generative AI tools

I think using generative AI tools requires a three-part human involvement and oversight process, one that uses our predictive and judgement-based skills to analyze generative AI output:

1) fact-check the output
2) decide what to do with the information
3) validate ethical use

On the last point, it’s especially important to scan the output for human invention, as it’s likely plagiarized. For example, if you asked AI to write you an essay about bees, it would be unethical to submit the content as your own (requirements 2 and 3 would fail). A hypothetical sketch of this checklist follows below.
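Here’s that three-part process sketched as a release checklist. The OversightReview class and its fields are inventions for illustration; the actual fact checking, judgement, and ethics review are human tasks that no code can perform for us:

```python
# A hypothetical sketch of the three-part oversight process as a checklist.
# The class and fields are illustrative inventions; the real checks are
# human judgement calls that this code only records, never performs.

from dataclasses import dataclass, field

@dataclass
class OversightReview:
    draft: str
    fact_checked: bool = False    # 1) claims verified against trusted sources
    decision: str = ""            # 2) the human decision on what to do with it
    ethics_cleared: bool = False  # 3) scanned for plagiarized/unattributed work
    notes: list[str] = field(default_factory=list)

    def approved(self) -> bool:
        """Release the output only after all three human checks pass."""
        if not self.fact_checked:
            self.notes.append("Blocked: facts not yet verified.")
        if not self.decision:
            self.notes.append("Blocked: no human decision recorded.")
        if not self.ethics_cleared:
            self.notes.append("Blocked: ethical review incomplete.")
        return self.fact_checked and bool(self.decision) and self.ethics_cleared

review = OversightReview(draft="AI-generated essay about bees")
review.fact_checked = True
review.decision = "Use as background research only, never submit as my own"
review.ethics_cleared = True
print(review.approved())  # True only once a human has signed off on all three
```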

Coming in an update: planning to use AI, even in prediction situations

In an update to this lab note, we will examine existing use-cases for AI and prediction. Human discernment about when to use predictive approaches to AI, and when not to, is another example of our human value: we decide which domains would benefit from predictive AI efforts.

Continued Reading: Optimism for human labor in an AI-driven future

For further optimistic reading despite a wary tech climate, check out “Tech is going strong, with asterisks,” a relevant issue of our newsletter, The Pipette.

Keep Exploring Lab Notes

Enjoyed the article? It’s part of our Lab Notes – a compilation of long-term learnings and emerging thoughts from our journey in the tech industry. Learn more or check out some additional lab notes below.
