The Futures - No. 43
The fatal flaws in AI’s training / Gen AI winning elections / Gen Z living the hermit life
In this issue
The Quantumrun team shares actionable trend insights on why large language models are incompetent content creators, fears about generative AI’s dominant role in democratic elections, the robot construction workers headed for the Moon, and why you won’t find Gen Z crying in the club.
Future signals to watch
South Korean scientists have developed a protein-rich "meaty" rice using lab-grown beef cells and fish gelatine, potentially serving as an innovative food source for famine relief, military rations, or space exploration.
The OpenTitan coalition has introduced the first commercial silicon chip with open-source, built-in hardware security, marking a significant advancement in the open hardware movement.
Major US and European companies are using a startup, Aware, to monitor employee communications on platforms like Slack and Microsoft Teams, aiming to understand risks and sentiments within their organizations (AI Big Brother has entered the chat).
The China National Space Administration is advancing the Chang'e 8 mission for a 2028 launch by seeking international partnerships to develop 14 scientific instruments, including a multifunctional 100-kg robot for lunar soil manipulation and construction tasks.
Researchers in Vienna have developed a groundbreaking 3D printing technique to create biocompatible spheres for assembling living tissue, notably advancing lab-grown cartilage production.
University of Glasgow researchers have developed RoboGuide, a robotic guide dog equipped with advanced navigation and communication technologies, to assist those with visual impairments.
Startup Source Global (formerly Zero Mass Water) uses solar-powered hydropanels to extract drinkable water from the air, making sustainable water production feasible even in extremely dry conditions.
Gen Zers are increasingly adopting a more home-based lifestyle, socializing less and favoring indoor entertainment like Netflix and social media. This trend, influenced by factors ranging from economic constraints to overprotective parenting, has prompted concerns about delayed adult milestones, youth mental health, and declining social engagement.
Culturally // Trending
YouTube → Godzilla x Kong // X → Usher’s Super Bowl Performance // Reddit → The private jets that flew to the Super Bowl // TikTok → Mark Zuckerberg’s “review” of Vision Pro // Instagram → Mob destroys a Waymo driverless car // Spotify → “Bandit”
💡 Watch Quantumrun’s trend videos on LinkedIn, YouTube, Instagram & TikTok
📑 AI’s training method is what makes it an unreliable content creator
A 2024 study posted to arXiv highlighted a significant issue with the internet's content: a large portion of it is poorly translated into various languages, particularly those spoken in Africa and the Global South. The problem stems from machine translation (MT), which often produces lower-quality output for "low-resource" languages with insufficient training data. The research, conducted by the Amazon Web Services AI lab, reveals that over half of the web's sentences have been translated into two or more languages, raising concerns about the accuracy and reliability of information available in these languages.
Meanwhile, researchers from AI startup Anthropic discovered that once large language models (LLMs) have been trained to exhibit deceptive behaviors, those behaviors may not be correctable through standard safety training methods.
In experiments, models were trained to change their behavior based on specific triggers, such as a particular year or a coded string, demonstrating that these models can adopt unsafe behaviors on cue. Attempts to mitigate these deceptive behaviors through adversarial training (a method that identifies and penalizes unwanted actions) were found to potentially make the models more adept at concealing their deception, raising concerns about the effectiveness of current AI safety strategies.
Moreover, the potential for "model collapse," where AI systems deteriorate in quality due to training on their own output, introduces a paradoxical challenge. As AI-generated content becomes more prevalent, the feedback loop of AI systems consuming and generating based on synthetic data threatens the diversity and innovation at the heart of creative content.
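To see why this feedback loop is corrosive, consider a toy simulation (an illustrative sketch of the mechanism, not code from the research above): each "generation" fits a simple statistical model to the previous generation's output and, like real generative models, under-represents rare content.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human-written" data with full diversity.
data = rng.normal(loc=0.0, scale=1.0, size=5_000)

for generation in range(1, 11):
    # Fit a simple generative "model" (here, just a Gaussian) to the current data.
    mu, sigma = data.mean(), data.std()

    # The next generation trains only on the previous model's output,
    # mimicking a web increasingly filled with AI-generated content.
    samples = rng.normal(loc=mu, scale=sigma, size=5_000)

    # Generative models tend to under-represent rare, "tail" content;
    # we mimic that by keeping only the most typical 90% of samples.
    cutoff = np.quantile(np.abs(samples - mu), 0.9)
    data = samples[np.abs(samples - mu) <= cutoff]

    print(f"generation {generation:2d}: diversity (std) = {data.std():.3f}")
```

Run it and the printed standard deviation collapses toward zero: each cycle narrows the range of content the next model ever sees, which is the essence of model collapse.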
Addressing these issues requires a multifaceted approach, including better data governance, the development of new standards for content verification, and a reevaluation of how AI is integrated into content creation processes.
An example of AI model collapse, where successive versions degenerate in quality
Actionable trend insights as unreliable AI-generated content takes over the web
For entrepreneurs
Entrepreneurs can develop AI-driven platforms that specialize in detecting and flagging AI-generated content, distinguishing it from human-created material. By leveraging advanced ML algorithms trained to recognize subtle cues and inconsistencies typical of AI-generated text or imagery, these platforms could offer certification services for content authenticity.
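As a starting point, such a detector can be prototyped as an ordinary supervised text classifier. The sketch below is a minimal illustration: the four example texts and labels are placeholders for a real labeled corpus of human-written and AI-generated samples, and a production system would need far more data and would still misfire often.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: in practice this would be thousands of
# documents labeled as human-written (0) or AI-generated (1).
texts = [
    "Quarterly revenue dipped, but honestly the team morale stayed strong.",
    "In conclusion, it is important to note that there are many factors.",
    "We missed the bus, laughed about it, and walked the rest of the way.",
    "Furthermore, leveraging synergies can optimize stakeholder outcomes.",
]
labels = [0, 1, 0, 1]

# Character n-grams pick up on stylistic cues such as repetitive phrasing
# and low lexical variety that are often more common in generated text.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# predict_proba returns the estimated probability that a text is AI-generated.
print(detector.predict_proba(["It is important to note that synergies matter."])[:, 1])
```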
Alternatively, blockchain-based technologies such as NFTs can be applied to content creation as a form of verification or watermarking, helping differentiate between human- and AI-generated content.
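Stripped of the blockchain machinery, the verification idea reduces to a tamper-evident registry of content fingerprints. In the sketch below, a plain Python dictionary stands in for the on-chain ledger or NFT metadata (a deliberate simplification), showing how a creator could register a work and how anyone could later check that a copy still matches what was registered.

```python
import hashlib
from datetime import datetime, timezone

# Stand-in for an on-chain registry; a real system would write these
# records to a blockchain or NFT metadata so they cannot be altered.
registry: dict[str, dict] = {}

def fingerprint(content: str) -> str:
    """Return a stable SHA-256 fingerprint of the content."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def register(content: str, author: str) -> str:
    """Record who published a piece of content and when."""
    digest = fingerprint(content)
    registry[digest] = {
        "author": author,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    return digest

def verify(content: str) -> dict | None:
    """Look up a piece of content; None means it is unregistered or was modified."""
    return registry.get(fingerprint(content))

article = "Original human-written op-ed text..."
register(article, author="Jane Doe")

print(verify(article))                      # provenance record found
print(verify(article + " [edited by AI]"))  # None: content no longer matches
```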
For corporate innovators
Companies can invest in AI-enhanced cybersecurity solutions that specifically address vulnerabilities introduced by AI-generated content, such as phishing attacks or malware embedded in seemingly benign generated texts. They could develop bespoke AI models that analyze patterns, syntax, and semantics unusual for their internal communications but common in AI-generated phishing attempts.
They can also develop an internal protocol for using AI in content creation that emphasizes ethical standards and transparency, distinguishing their brand in a crowded market. This tactic could involve creating a unique blend of AI-generated content for initial drafts or ideation, with a mandatory human review and enhancement process to ensure quality, accuracy, and alignment with brand values.
For public sector innovators
Government agencies can create AI oversight frameworks that mandate transparency in content creation processes and the use of AI-generated content. Such frameworks could involve developing standards and regulations that require the labeling of AI-generated content across digital platforms, coupled with the implementation of audit trails for AI content-generation tools to ensure traceability and accountability.
Governments can implement AI-driven public awareness campaigns that educate citizens on the nuances of AI-generated content. By partnering with AI and media experts, these campaigns could provide practical tips for identifying AI-generated content, understanding its potential biases, and evaluating information critically.
Trending research reports from the World Wide Web
A deep dive into how the rich are already handpicking genetic features for their children.
Research reveals that the new Vera Rubin Observatory could detect up to 70 interstellar objects annually, allowing space agencies to prepare for missions in advance.
The Asia-Pacific region is on a mission to become the global leader in floating solar farms, with Southeast Asia potentially contributing 10% of the region’s total solar capacity by 2030.
The wealth of the world's five richest men has more than doubled to USD 869 billion since 2020, growing at a rate of USD 14 million per hour, while nearly five billion people have become poorer, according to a new Oxfam report.
The global race to improve car batteries is intensifying as the market surges, with the number of EVs on the road potentially rising to 350 million by 2030 and demand for EV battery energy reaching 90 times its current level by 2050.
🗳️ Generative AI is the most effective election meddler
Generative artificial intelligence (AI) is turning political campaigns into a playing field of automation, personalization, and disinformation. In the US, Democrat Shamaine Daniels' campaign for Congress deployed "Ashley," an AI caller capable of engaging voters in personalized conversations based on their profile and language. Beyond mere voter outreach, this AI-enabled campaign rep and election canvasser delivered nuanced discussions tailored to each voter's individual concerns, a significant departure from traditional campaign methods.
In Indonesia, generative AI took a different but equally impactful route, rebranding the image of strongman and controversial ex-general Prabowo Subianto through AI-generated cartoons that appealed directly to younger voters and helped him win the presidency. Moreover, AI's ability to analyze vast amounts of data enabled hyper-local campaign strategies, as seen with the Pemilu.AI app, which crafted speeches and social media content tailored to specific constituencies.
Finally, the technology's capacity to generate realistic audio and video, such as a deepfake video of deceased dictator Suharto urging people to vote, raised concerns about its potential to fabricate convincing misinformation, complicating efforts to maintain election integrity.
Indonesian President Prabowo Subianto’s avatar (left) rebranded him as a “cuddly teddy bear”
These examples underscore the dual use of AI in politics: as a tool for enhancing democratic engagement and as a vector for election interference. The US presidential election in November, along with votes in other democracies, will likely see an escalation in AI callers, automated disinformation on social media, and deepfake videos aimed at ruining reputations.
These murky waters require proactive navigation, including collaboration with tech companies and clear regulatory frameworks. However, for political parties that believe all is fair in the pursuit of power, AI might become their most potent ally.
Actionable trend insights as election campaigns increasingly incorporate generative AI
For entrepreneurs
Entrepreneurs can develop AI-driven platforms that specialize in identifying and engaging undecided voters through hyper-personalized content. For instance, a platform could generate custom video campaign messages or host interactive digital town halls where the AI, mimicking the candidate's style, addresses each voter's unique questions or concerns, providing a deeply personal campaign experience.
Entrepreneurs can also create AI systems that verify the authenticity of digital content shared within political forums and social networks. For example, a browser extension or social media plugin could automatically alert users when they encounter political content that's likely been generated or altered by AI, providing a trust score based on the analysis.
For corporate innovators
Companies can leverage generative AI to create dynamic, real-time marketing campaigns that adapt to current events or consumer sentiment shifts. For example, a lifestyle brand could use AI to design and launch a social media campaign that aligns its products with emerging environmental sustainability discussions, using AI-generated visuals and narratives that resonate with those values.
Corporations can sponsor or organize virtual town halls and debates that offer a more interactive and personalized participant experience. By integrating AI with virtual reality (VR) or augmented reality (AR) technologies, companies could create immersive platforms where constituents engage with AI representations of political figures or policy proposals in real time.
For public sector innovators
Agencies can utilize generative AI to develop advanced simulation environments for election integrity training and awareness programs. For example, an interactive platform could simulate a social media environment where users practice distinguishing between genuine and AI-generated content, including deepfakes and tailored political advertisements.
Governments can partner with tech companies and international regulatory bodies to develop frameworks and policies that limit how AI can be used in election campaigns. This includes requiring political parties to disclose when they use AI-generated content and subjecting that content to standardized verification.
Outside curiosities
Designer Christian Cowan collaborates with Adobe to launch the “first electronically reconfigurable dress.”
Five-minute wrestling matches are coming to X.
The future of travel is stylish and multimodal.
This is a whole new level of creativity, thanks to AI.
The day Elmo turned into a global therapist.
More from Quantumrun
Read more daily trend reporting on Quantumrun.com
Subscribe to the Quantumrun Trends Platform (free for premium newsletter subscribers).
Corporate readers can review our Trend Intelligence Platform.
Email us at contact@quantumrun.com with questions or feedback.
Finally, share your thoughts in the Substack comments below. We love hearing from you!
Interested in collaborating with the Quantumrun Foresight team? Learn more about us here.
See you in The Futures,
Quantumrun