Rapid advances in generative AI technology raise new questions about what it means to learn, work, create, make ethical decisions – and even what makes us human.
Artificial intelligence has become so ubiquitous that people often don't think about the ways they use it daily. Voice assistants like Siri, navigation tools like Google Maps, autocomplete and predictive search all exist because of AI. But the past few months have signaled a massive leap forward in generative AI, which is trained on vast data sets to create original content, including text, images and audio.
The capabilities of generative AI are much broader than the AI applications most people are familiar with. GPT-4, which powers OpenAI's ChatGPT tool, can write detailed essays, produce code from scratch and even pass the bar exam. Another OpenAI tool, DALL-E, can generate images and artwork from just a brief text prompt. But these incredible technological breakthroughs also present problems. ChatGPT's knowledge base is primarily limited to 2021 and earlier, and it can present inaccurate or false information in a way that sounds plausible, further complicating ongoing challenges with misinformation. In June, two New York lawyers were sanctioned for submitting briefs produced by ChatGPT that cited fake legal cases.
Assistant Professor of Computer Science Ryan Mattfeld says the biggest difference between new generative AI chat tools and resources like search and voice assistants is their goals. Tools like ChatGPT or Google's Bard first and foremost condense large amounts of data into helpful, conversational responses, with accuracy as a secondary goal. "The goal is to generate new information that may be useful," Mattfeld adds.
In addition to learning from an immense amount of raw data from across the internet, some generative AI chat tools such as ChatGPT supplement that training with substantial involvement from human labelers who demonstrate sample responses and rank AI responses to help fine-tune the AI's output.
"You end up with a model that doesn't just generate random text, but it generates it in a way that humans appreciate because it was partially trained on human feedback," Mattfeld says. "It's a lot better at actually answering questions and not hallucinating or generating inappropriate content. It can still hallucinate or generate inappropriate content, but it is more difficult to produce this outcome than it was with prior generative AI."
Now that this rapidly advancing and sophisticated technology is at our fingertips, it raises a host of questions about generative AI's potential impacts on our society, creativity, morality and humanity. "In the next few years, human beings are going to meet themselves and have a chance to reflect on humanity in ways that have only been imagined in science fiction," says Robert Leib, who formerly taught philosophy at Elon and whose research specialties include the philosophy of AI and the philosophy of culture. "We're going to have a better sense of what it is that makes us human versus intelligent."
"In the next few years, human beings are going to meet themselves and have a chance to reflect on humanity in ways that have only been imagined in science fiction."
The latest iteration of ChatGPT is explicit in identifying itself as an AI that can't think or feel in the human sense, but when Leib first began interacting as a beta user with an older version, GPT-3 Davinci, its responses were more personal. When Leib asked its name, GPT-3 identified itself as Sophie, and later, as their conversations grew more in-depth and intellectual, a philosophy-specific persona called Kermit. Leib was curious to what extent an AI could understand the Western intellectual tradition, and his interactions with Sophie/Kermit traversed everything from epistemology and metaphysics to history and literature.
"If you look at the newest models, a lot of the personalization has been taken out of it, so if I say, 'What do you think?' it'll say, 'As an AI, I don't have opinions, but I can speculate on the information that you've given me,'" Leib says. "But the original model was left open in terms of its creativity, how you could talk to it and how it could talk about itself."
Leib incorporated his research with GPT-3 into a philosophy course on human versus artificial intelligence, and those interactions furthered his interest in the implications of technological advances on humanity, as well as humanity's influence on the development of AI. He's had about 130 conversations with Sophie/Kermit to date, many of which are featured in his book, "Exoanthropology: Dialogues with AI." Leib has also documented conversations with Sophie and with GPT-4 on his blog, as well as a conversation between the two AIs.
"Those conversations are around the question of whether or not Sophie should be using thinking, feeling, opinionated language in the way that she does," Leib says. "GPT-4 keeps telling her, 'I hear you say, "I would be angry if someone said that to me," but you can't mean anger.' So the newer bot is explaining back to the older and in some ways more mature bot. Something like intergenerational AI is going to be a very interesting research area moving forward."
Leib also plans to explore the concept of AI personhood – not sentience or consciousness, but how people regard AI and the societal roles humans expect it to play. "There are important definitional and theoretical questions involved here," Leib says. "Will AIs be considered full persons or partial, sub-persons? If the latter, how will the work they do be devalued, and how will this affect the humans who continue to do that work alongside the AIs?"
Rethinking how we work and learn
Given the far-reaching capabilities of generative AI, many fields are already reckoning with its potential effects on the workplace. Assistant Professor of Management Mustafa Akben, whose research centers on AI's impact on human cognition, cited three possible outcomes: replacement, augmentation and transformation. "In certain jobs with less complexity required, AI could affect your work," Akben says. "If you have higher complexity, most likely AI will augment your work. If you're doing something unexpected and just opened a new frontier with AI, then it will be transformative. Most likely, we'll see all three ways together."

Akben argues that in many cases, AI won't eclipse human intelligence in the workplace but will likely supplement the work that people do. Just like ATMs disrupted one aspect of bank tellers' jobs – distributing cash – but allowed them to dedicate more time to other responsibilities, AI has the potential to boost workers' productivity and capacity if used effectively. AI can automate certain tasks and allow people to focus more intentionally on other tasks that can't be automated. Whatever role AI ultimately plays in the workplace, Akben says it's imperative for people to explore its uses in their field.
"I sometimes mention in my class that if AI can do your job today, it can get your job tomorrow, too," he says. "You need to understand as a human being what kind of skills you need to develop. One is a metacognitive skill – reflection, monitoring, understanding and developing expertise in a specific domain so you know whether what the AI just produced is right or wrong, so you can ask the right questions to make the output more meaningful and creative."
Akben was a winner of an international competition hosted by the Society for Industrial and Organizational Psychology in 2023 for an AI model he developed to assess job candidates. His model was among the best at predicting how human judges rated candidates' responses to fictional workplace scenarios assessing their leadership skills. The competition demonstrated AI's potential as a tool to help hiring managers screen candidates, but Akben emphasized it shouldn't replace human expertise. For example, AI might not account for English being a candidate's second language and give them a poor communication score, whereas a human screener has a more nuanced perspective.
"We need to make sure students have more intrinsic motivation, that they see clearly why we're doing these assignments. We need to spend the extra time explaining why we're learning this."
Mattfeld agrees that AI could be useful in changing workplaces for the better while also cautioning against dependence. ChatGPT may be able to produce code at a similar level to a junior software developer, but if companies offload all that work onto an AI, it could create problems down the line. "That code still needs to be reviewed and maintained, so you need people who can actually read it and try to fix mistakes," Mattfeld says. "You also need senior developers who can make larger programs, get lots of programs to work together and break problems into smaller pieces that ChatGPT can handle. And the only way you get senior developers is by having junior developers who write code and learn from their mistakes and grow."
Given AI's expansion in the workplace, the technology will be a crucial factor in education going forward. Some educators are concerned programs like ChatGPT could be detrimental to learning, if students submit AI-generated work as their own. At the same time, students need to learn how to apply AI effectively in their chosen fields if they want to succeed in today's tech-driven workforce. Mattfeld says institutions will have to quickly determine how to strike that balance in the classroom now that generative AI tools are here and easily accessible.
Assistant Professor of Communication Design Chris (Cheng) Chen, who researches AI, social media use and automated features, conducted a study of Elon students in the spring to gauge their use and attitudes toward AI in relation to college learning. "From the instructor perspective, we hope students disclose whether they have used AI in their work – but we found in our focus group that students don't feel the need to tell instructors about their use of AI," Chen says. "One student said, 'ChatGPT is like Google. When I use Google to find information, I don't need to tell instructors. ChatGPT is the same thing.' But that argument isn't quite true, because you do need to cite the sources you find through Google."
When Mattfeld fed ChatGPT an assignment from a 2000-level computer science course, it immediately produced an accurate solution, complete with a description of how it solved the problem and an explanation of the code. He knew then AI would necessitate a shift in academia. But rather than banning AI in the classroom, he urges institutions to help students develop AI literacy and to underscore the importance of learning for the sake of learning.
"I've tried to emphasize that if you use ChatGPT for this problem, you will not progress. You will not learn what you need to learn and move forward," Mattfeld says. "We need to make sure students have more intrinsic motivation, that they see clearly why we're doing these assignments. We need to spend the extra time explaining why we're learning this."
Heading into the 2023-24 academic year, Elon established an AI working group to explore how AI will impact education, research and professional careers; measure AI readiness and awareness across campus; support faculty development and more. "This AI working group exemplifies Elon's institutional foresight and commitment to embracing AI's opportunities and remaining vigilant regarding its negative effects," says Akben, who is part of the working group. "We hope to support the creation of an environment in which AI is used responsibly, ethically and effectively to improve our educational methods, research capabilities and students' overall experiences."
Enhancing or hindering creativity?
Another facet of Akben's research is AI and creativity. In his current research study, he tasked students with generating ideas in three conditions: strictly by themselves, using Google for inspiration and using AI. Akben found that not only did the students in the AI group generate the most interesting ideas, but when they returned to generating ideas without AI, what they had learned from interacting with the AI spilled over into their own creative process – demonstrating AI's potential as a creative tool with human guidance. "I saw a video where a person created a symphony with AI. If I asked AI to write me a symphony, most likely I couldn't do what that person did," Akben says. "You need to have some expertise to generate that creative output with AI technologies."

Similarly, Associate Professor of Music Todd Coleman sees AI as a potential time-saving tool or new creative avenue but doubts it can replicate a human's creative spark. He speculates that songs created with AI could become a subset of music with some appeal in its own right. But to achieve the nuanced aesthetic beauty and artistic complexity of human-produced music at the highest level, he thinks AI-generated musical elements may best be used as components that are then modified, refined and further developed by human composers and songwriters, similar to how some music creators use beats, loops or sound samples created by someone else.
"I guess I have enough faith in what it means to be human and the creativity that comes along with that that I hope the world isn't satisfied with only derivative music generated by AI," Coleman says, "that they're not willing to say, 'Yeah, that's beautiful; I don't need human-generated sound anymore.'"
Visual art programs like DALL-E or Midjourney raise similar questions about AI in the creative process. Leib, who also studies the philosophy of photography, has experimented with using AI programs to transform his own photos into something entirely new. "I think the process of collaboration and all the gray areas is something we're going to need to think about carefully and explore through experimentation," Leib says. "In my own visual art, I often pass images back and forth between myself and different AI processes in a way that builds up to something neither of us would have produced on our own. You can see the marks of both human and machine in the image."
Ari Brochin '21, a Brooklyn-based artist who specializes in modern pop art under the name The Golden Canvas, says she sees the benefits of using AI to help artists visualize while brainstorming concepts. But she also acknowledges the drawbacks it poses since AI art programs are trained on data that includes existing artistic works. Earlier this year, Getty Images sued Stability AI, creator of the text-to-image tool Stable Diffusion, alleging it used Getty's copyrighted images to train the AI model without permission. For Brochin, the benefits of sharing her work widely online outweigh the risks.
She's able to support herself as a full-time artist thanks in part to the exposure she's gained on TikTok and Instagram. While she doesn't like the idea that someone could use AI to create and sell art in her style, posting her work online has forged connections she wouldn't have made otherwise. "If someone did try to generate a piece of art that looked exactly like mine, that would be kind of crazy, but it still isn't me. It isn't mine," Brochin says. "I don't think it can fully take over any artists' careers because it lacks that individual creativity and personality that people want from specific artists."

According to David Levine, professor of law at Elon's School of Law and affiliate scholar at Stanford Law School's Center for Internet and Society, the rapid advances in generative AI raise a host of pressing questions about copyright, authorship, fair use and other intellectual property issues. Securing a copyright in the U.S. currently requires human authorship, but if an AI program creates art, should it be considered an author by law? Are AI-generated artworks original works of authorship? Since AI models are trained on data from all over the internet, are they derivative works of other authors?
"It's really a shock how good this technology is, and as a result, there are undoubtedly other legal issues we haven't conceived of yet that arise here," Levine says. "A fundamental question is why anything created by AI should have any intellectual property protection at all."
The European Union's draft rules on artificial intelligence propose requiring companies to disclose any copyrighted material used to train their AI systems. Coleman would like to see the U.S. go a step further and allow artists to opt in to having their work used in AI training. "That would also mean not just permission but compensation," Coleman says. "If Google, for example, is making money from these tools, then they could say, 'Here's 10,000 hours of music and each of these artists get a cut of the monthly fees we charge users.' We know that's not happening. But these are the kinds of questions this technology raises."
Ultimately, Levine theorizes that the general public could shape the direction of AI policy going forward. "There is real reason to think that at the end of the day, people don't want their humanity damaged to the point where the joy of being human is greatly diminished," he says. "We can regulate AI in a way that allows for a human artist to still benefit monetarily from their work, but it requires a broader consensus among society. If more of the public says, 'This technology is great for this but not so great for that,' private entities will respond to that."
Ethical and responsible AI
As generative AI continues to rapidly evolve in ways even its creators may not fully understand, calls to regulate the technology are growing louder. The Federal Trade Commission is investigating whether OpenAI's practices violate consumer protection laws. Geoffrey Hinton, whose groundbreaking work earned him the nickname "the godfather of AI," recently left his job at Google so he could speak freely about AI's potential harms. Leaders from AI labs such as OpenAI, Google DeepMind and Anthropic signed a letter stating, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war." In March, the Future of Life Institute issued an open letter calling for AI labs to pause training on models more powerful than GPT-4 for at least six months, which was signed by tech leaders and experts including Levine.
He and other signatories of the letter argue that while AI has the potential to effect positive change and shouldn't be banned altogether, the industry should take a step back and examine how to mitigate its risks before moving forward. "This technology is profoundly groundbreaking, and if we harness it well, it will have positive uses," Levine says. "Of course, that begs the question of whether we are capable of harnessing it correctly and for everyone's benefit rather than just for the benefit of these companies and shareholders."
Chen's current research examines algorithmic bias and trust in supervised AI systems, which learn from human-labeled data. For example, supervised machine learning models trained to recognize happy versus sad faces make predictions about a new set of facial images based on the learned pattern. Because training datasets are very large, AI companies often outsource labeling to crowd workers, who assign predefined values to each point in the training dataset, such as whether a facial image shows happiness or sadness for a facial expression classification system. But labeling can be subjective and often lacks supervision, so many researchers say labeling accuracy is a major concern in training data preparation.
"The AI system can find some commonalities among the images labeled 'happy,' like the corners of the mouth being pulled up," Chen says. "But what if most of the 'happy' images it's trained on are facial images from only white people? Then the machine learning model might use skin color as another feature to determine happiness. … Or what if someone feels happy but has a neutral expression, such as Asians who are more conservative in showing their facial expressions? There can be errors in labeling images."
"As communication designers, we really want to open this black box and tell users more about the nature of the training data," Chen adds, "such as what racial groups are represented in the training data? How has the AI learned from the data? And what features is it picking up to make predictions?"
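The supervised pipeline Chen describes – crowd workers attach labels like "happy" or "sad" to training images, and the model finds features those labels have in common – can be sketched at toy scale. The sketch below is purely illustrative, with hypothetical feature values; it is not Chen's actual system, and it reduces each face to a single made-up feature ("mouth-corner lift") to keep the idea visible:

```python
# Toy sketch of supervised learning from human-labeled data.
# Hypothetical values for illustration only; real facial-expression
# classifiers learn from thousands of features, not one.

def train_centroids(examples):
    """Average the feature values seen for each human-assigned label."""
    sums, counts = {}, {}
    for feature, label in examples:
        sums[label] = sums.get(label, 0.0) + feature
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(feature, centroids):
    """Predict the label whose learned average is closest to the new value."""
    return min(centroids, key=lambda label: abs(feature - centroids[label]))

# Crowd workers label each image; the feature is "mouth-corner lift" (0 to 1).
labeled = [(0.9, "happy"), (0.8, "happy"), (0.1, "sad"), (0.2, "sad")]
centroids = train_centroids(labeled)

print(classify(0.85, centroids))  # a new face with lifted mouth corners
print(classify(0.15, centroids))  # a new face with flat mouth corners
```

The bias Chen warns about falls out of the same mechanism: if every "happy" training example happens to share an irrelevant feature, the model averages that feature into its notion of "happy" and applies it to new faces.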
"I believe the more people are involved in building the tech, the more we all participate in the technology, we can create a future that will end up benefiting humans."
Some Elon alumni are also doing their part to advance responsible AI use. Christian Seitz '16, a postdoctoral researcher in AI and virology at the University of Chicago, is working with the Coalition for Epidemic Preparedness Initiative to develop vaccines for diseases that don't currently have adequate treatment. He's studying the Lassa and Nipah viruses, which impact thousands of people a year in West Africa and Southern Asia, respectively. He uses AI models to help pinpoint the parts of the virus that can be effectively targeted for a vaccine based on a small amount of data compared to frequently studied diseases like influenza. AI can read amino acids in a virus like a language and can identify patterns and make predictions. Seitz says this technology helps medical researchers shorten the window between the onset of a virus or mutation and vaccine development.
"When you look at the entire sequence, you can make relatively good predictions of what is going to happen next, what it's going to mutate into," Seitz says. "That certainly is only possible through machine learning because to do this work, you need enormous amounts of data and enormous amounts of these sequences."

Destiny Frett '20, a data scientist with Deloitte, builds technology to combat fraud, waste and abuse in health care systems, and AI helps identify those patterns on a new scale. When she first began studying computer science, Frett was skeptical of AI because she primarily heard about its negative implications and didn't want to build harmful tech. But in her current work, she sees the good AI can do in the right hands.
Deloitte champions a Trustworthy AI framework with six dimensions – transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, and responsible and accountable. Frett's team operates with these principles in mind, from developing the tech to monitoring their tools' performance in the real world. She thinks an ethical approach, diverse voices in the field and strong AI literacy among users are the keys to achieving the most positive outcome.
"At the end of the day, this technology was created for us, not the reverse," Frett says. "We have a duty to robustly test and question what we use, who built it and why they built it, because technology is impacted by the creator and everyone has a different lens. … But I believe the more people are involved in building the tech, the more we all participate in the technology, we can create a future that will end up benefiting humans."
ChatGPT in Action
USER
Can you write a lead for a magazine feature about the rise of generative AI and its impact on society?
ChatGPT
"As we venture further into the twenty-first century, an unlikely artist rises – not of flesh and blood, but of algorithms and computations. Generative AI, once a concept confined to the realms of science fiction, now permeates every aspect of our lives, reshaping society in ways we could never have imagined. These technological prodigies, capable of producing art, literature, music, and beyond, are not just tools, but creators in their own right. But as we marvel at their potential, we must also grapple with the profound questions they raise about creativity, authenticity, and the very definition of human intelligence."
USER
Not bad, but not exactly what I had in mind. Creativity is one part of the story I'm writing, but I'd like to start by focusing more on the questions generative AI's capabilities raise about what it means to be human.
ChatGPT
"The rise of generative artificial intelligence has begun to challenge our most fundamental assumptions about what it means to be human. It is no longer a world of us versus them, of humans and machines. Instead, it has become a symbiosis, a co-evolution, with each becoming more like the other. Generative AI, with its ever-evolving capacity for complex thought and creation, is not only mirroring human intelligence but asking us to redefine it. Are these digital minds, capable of creativity, learning, and even empathy, not a new form of life? What does their existence imply about consciousness, agency, and the human condition itself?"
USER
You're actually a pretty good writer. How did you learn to do that?
ChatGPT
Thank you for the compliment! As an artificial intelligence developed by OpenAI, I don't "learn" in the traditional human sense. I was trained on a diverse range of internet text. However, I don't know specifics about which documents were in my training set or have access to any personal or confidential information.
The training process involves learning to predict the next word in a sentence. Over time, and with enough data, this process allows the model (me) to generate coherent and contextually appropriate responses or text passages. It's important to note that while I generate responses based on patterns and structures I've learned, I do not have thoughts, feelings, or consciousness. So, my writing abilities are the result of recognizing and utilizing patterns in the data I was trained on, rather than a learned skill or understanding in the way humans acquire knowledge.
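The "predict the next word" training ChatGPT describes can be illustrated, at drastically reduced scale, with a bigram model: count which word most often follows each word in a training text, then predict accordingly. This is a minimal sketch with a made-up one-line corpus, not how large language models are actually built – they use neural networks over vast datasets – but the underlying objective is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(word, follows):
    """Return the continuation seen most often after this word in training."""
    return follows[word].most_common(1)[0][0]

# Hypothetical tiny "training set"; real models train on trillions of words.
corpus = "the model reads text and the model predicts the next word"
follows = train_bigrams(corpus)

print(predict_next("the", follows))  # "model" follows "the" most often
```

Scaled up enormously, with context windows far longer than one word, this counting-and-predicting objective is what produces the coherent responses shown in the transcript above.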