July 26, 2024


If your carefully constructed life plan has been derailed by too many nights at the bar, fast-food cravings and a failure to pay into the company pension, it might be time for a chat with your future self.

Without ready access to a time machine, researchers at the Massachusetts Institute of Technology (MIT) built an AI-powered chatbot that simulates a user’s older self and spits out observations and pearls of wisdom. The goal is to encourage people to think more today about the person they want to be tomorrow.

With a profile photo digitally aged to show youthful users as wrinkly, white-haired seniors, the chatbot generates plausible synthetic memories and draws on a user’s current aspirations to spin stories about their successful life.

“The goal is to promote long-term thinking and behavior change,” says Pat Pataranutaporn, who works on the Future You project at MIT’s Media Lab. “This can motivate people to make wiser choices in the present that optimize for their long-term well-being and life outcomes.”

In one conversation, a student hoping to be a biology teacher asked the chatbot, a simulated 60-year-old version of herself, about the most rewarding moment in her career. The chatbot said it was a retired biology teacher in Boston and recalled a special moment when it helped a struggling student turn their grades around. “It was so gratifying to see the student’s face light up with pride and achievement,” the chatbot said.

To interact with the chatbot, users are first asked to answer a series of questions about themselves, their friends and family, the past experiences that have shaped them, and the ideal life they envision for the future. They then upload a portrait image, which the program digitally ages to create a likeness of the 60-year-old user.
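None of the project’s code or data schema is public, so purely as an illustration, the profile gathered in this step might be modeled along the lines of the hypothetical Python sketch below; every field name here is an assumption rather than the project’s actual design.

```python
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Hypothetical container for the questionnaire answers and the aged portrait."""
    name: str
    current_age: int
    target_age: int                                            # Future You presents a 60-year-old self
    relationships: list[str] = field(default_factory=list)     # friends and family
    formative_experiences: list[str] = field(default_factory=list)
    aspirations: list[str] = field(default_factory=list)       # the ideal future life
    aged_portrait_path: str = ""                               # output of the face-aging step


# Example profile, loosely echoing the aspiring biology teacher in the article.
profile = UserProfile(
    name="Alex",
    current_age=22,
    target_age=60,
    relationships=["younger sister", "college roommate"],
    formative_experiences=["volunteered as a tutor in high school"],
    aspirations=["become a biology teacher in Boston"],
    aged_portrait_path="alex_aged.png",
)
```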

Next, the program feeds the user’s responses into a large language model that generates rich synthetic memories for the simulated older self, so that when the chatbot answers questions it draws on a consistent backstory.
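The paper does not publish its prompts, but this memory-generation step can be pictured as a single call to the language model that turns the profile into a first-person backstory. The sketch below, which reuses the hypothetical UserProfile above and the OpenAI Python SDK, shows one plausible way to do it; the prompt wording and model choice are assumptions, not the project’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_synthetic_memories(profile: UserProfile) -> str:
    """Ask the model to write a first-person backstory for the simulated older self.

    The prompt wording here is illustrative, not the project's actual prompt.
    """
    prompt = (
        f"You are {profile.name} at age {profile.target_age}, looking back on your life. "
        f"Today {profile.name} is {profile.current_age}, values "
        f"{', '.join(profile.relationships)}, was shaped by "
        f"{'; '.join(profile.formative_experiences)}, and hopes to "
        f"{'; '.join(profile.aspirations)}. "
        "Write a short first-person memoir describing how those hopes plausibly "
        "played out, including a few specific, vivid memories."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```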

The last part of the system is the chatbot itself, powered by OpenAI’s GPT-3.5, which presents itself as a potential older version of the user and can talk about that life’s experiences.
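Continuing the hypothetical sketch, one common way to keep such a chatbot in character is to pin the generated backstory into the system message of every request. The code below illustrates that pattern with GPT-3.5 through the OpenAI SDK; it is a guess at the general shape of the system, not the Future You codebase.

```python
def chat_with_future_self(profile: UserProfile, memories: str,
                          history: list[dict], user_message: str) -> str:
    """Answer one user turn in character as the simulated older self.

    `history` is the running list of {"role": ..., "content": ...} messages so far;
    it is updated in place so the conversation stays coherent across turns.
    """
    system_prompt = (
        f"You are a potential future version of {profile.name} at age "
        f"{profile.target_age}. Speak in the first person, stay consistent with "
        "the following backstory, and gently encourage long-term thinking.\n\n"
        f"Backstory:\n{memories}"
    )
    messages = [{"role": "system", "content": system_prompt},
                *history,
                {"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response.choices[0].message.content
    history += [{"role": "user", "content": user_message},
                {"role": "assistant", "content": reply}]
    return reply


# Example turn, reusing the profile and memories from the earlier sketches:
# memories = generate_synthetic_memories(profile)
# history: list[dict] = []
# print(chat_with_future_self(profile, memories, history,
#                             "What was the most rewarding moment in your career?"))
```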

Pataranutaporn has had several conversations with his “future self,” but said the most profound was when the chatbot reminded him that his parents won’t be around forever, so he should spend time with them while he can. “The session gave me a perspective that still has an impact on me to this day,” he said.

Users are told that the “future self” is not a prediction, but rather a potential future self based on the information they provided. They are encouraged to explore different futures by changing their answers to the questionnaire.

According to a preprint paper on the project, which has not yet been peer-reviewed, trials involving 344 volunteers found that conversations with the chatbot left people feeling less anxious and more connected to their future selves. This stronger connection should encourage better life decisions, Pataranutaporn said, from focusing on specific goals and exercising regularly to eating healthily and saving for the future.

Ivo Vlaev, a professor of behavioral science at the University of Warwick, said people often struggled to imagine their future selves, but doing so could drive greater persistence in education, healthier lifestyles and more prudent financial planning.

He called the MIT project a “fascinating application” of behavioral science principles. “It embodies the idea of a nudge — subtle interventions designed to guide behavior in beneficial ways — by making the future self more salient and relevant to the present,” he said. “If implemented effectively, it has the potential to have a significant impact on how people make decisions today with their future well-being in mind.”

“From a practical point of view, its effectiveness will likely depend on how well it can simulate meaningful and relevant conversations,” he added. “If users perceive the chatbot as authentic and informative, it can significantly influence their behavior. However, if the interactions feel superficial or gimmicky, the impact may be limited.”


