Alibaba’s next AI video model is here
Plus, Amazon’s Alexa is about to get an upgrade.

Welcome back.
Alibaba just made AI video generation more accessible. With Wan 2.1 open-sourced, creators and developers can experiment with advanced video models without relying on proprietary tools.
But can it compete with OpenAI’s Sora? Let’s take a look.
In today’s release:
1. Alibaba’s AI video generation models
2. Amazon unveils Alexa+
3. Meta announces Aria Gen 2

Alibaba open-sources Wan 2.1 AI video models
Alibaba has released Wan 2.1, a suite of open-source AI video generation models now available on Hugging Face.
These models promise high-quality video generation, with capabilities ranging from text-to-video (T2V) to image-to-video (I2V) conversion. The smallest variant can even run on consumer GPUs, making advanced AI video creation more accessible.
Model variants: Includes T2V-1.3B, T2V-14B, I2V-14B-720P, and I2V-14B-480P
Efficiency: T2V-1.3B runs on an Nvidia RTX 4090, generating a 5-second 480p video in ~4 minutes
Comparison: Alibaba’s benchmarks show it outperforming OpenAI’s Sora in scene consistency, object accuracy, and spatial positioning
The Wan 2.1 architecture uses an advanced 3D causal VAE, improving memory efficiency and enabling unlimited-length 1080p video generation without sacrificing quality. Alibaba has released the models under the Apache 2.0 license, which permits both research and commercial use.
How can you take advantage of this?
The Hugging Face release makes it easy to experiment with AI video generation locally, and for businesses, Alibaba’s models could provide a strong alternative to proprietary tools like OpenAI’s Sora, especially for projects requiring high accuracy and spatial control. Wan 2.1 is also available on Krea AI.
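If you want to try it yourself, here is a minimal sketch of what local text-to-video generation with the 1.3B model could look like through Hugging Face diffusers. The checkpoint id, default resolution, and frame count are assumptions based on the Hugging Face release, so treat this as a starting point and check the model card for the officially documented usage.

```python
# Minimal sketch: text-to-video with Wan 2.1 (1.3B) via Hugging Face diffusers.
# Assumes a recent diffusers release with Wan 2.1 support and an RTX 4090-class GPU;
# the checkpoint id below is the one listed on Hugging Face at the time of writing.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"  # assumed checkpoint id

# Load in half precision so the pipeline fits in consumer GPU memory.
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A red fox running through a snowy forest at sunrise, cinematic lighting"

# Roughly 5 seconds of 480p video; tune num_frames and guidance_scale to taste.
result = pipe(
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
)
export_to_video(result.frames[0], "wan_t2v_sample.mp4", fps=16)
```

The 14B variants follow the same pattern but need considerably more VRAM than a single consumer GPU typically offers.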

Amazon unveils Alexa+: a smart AI voice assistant
Amazon has introduced Alexa+, an AI-enhanced version of its popular voice assistant, promising more personality, better contextual awareness, and humanlike conversation flow.
Unlike the original Alexa, the new version requires a subscription: it’s included at no extra cost for Prime members but costs $19.99 a month for everyone else.
AI features: Alexa+ can plan study schedules, text contacts, call Ubers, and analyze video feeds
AI models: Alexa+ runs on a mix of Anthropic’s models and Amazon’s own Nova models
Timeline: The new Alexa will be rolling out to users in the US next month
The assistant will remember personal details like dietary preferences, allergies, and past interactions, offering a more personalized experience. It can also gauge emotions from video input, as demonstrated when it analyzed a live crowd’s mood during the unveiling event.
How can you take advantage of this?
If you're a Prime member, you get Alexa+ at no cost. For businesses, this signals a shift toward paid AI voice services, meaning companies may soon explore similar AI integrations to boost engagement and efficiency.

Meta announces Aria Gen 2: AI smart glasses for research
Meta has unveiled Aria Gen 2, an upgraded version of its research-focused smart glasses.
With an improved sensor suite, on-device AI processing, and enhanced usability, Aria Gen 2 aims to push the boundaries of contextual AI and egocentric computing.
Advanced sensors: Includes SLAM cameras, eye tracking, spatial microphones, IMUs, and a PPG heart rate sensor
On-device AI: Speech recognition and hand tracking are processed locally using Meta’s custom silicon
Expanded use cases: Supports AI-driven robotics, accessibility tech, and smart vehicle integration
Meta’s first-generation Aria glasses have already been used in projects like BMW’s AR/VR research. Envision, a company building tools for people who are blind or have low vision, will use the second-generation glasses to improve indoor navigation for its users.
How can you take advantage of this?
Researchers and developers can apply to access Aria Gen 2 to explore AI-driven navigation, robotics, and augmented reality. With real-time spatial awareness and enhanced AI processing, these glasses offer a glimpse into the future of human-AI interaction and next-gen computing platforms.
OTHER AI NEWS
ElevenLabs introduces Scribe: A new speech-to-text model that transcribes speech in 99 languages with what the company claims is state-of-the-art accuracy. It’s available via API and the ElevenLabs dashboard.
Microsoft unveils Phi-4 models: Microsoft says its new Phi-4-multimodal and Phi-4-mini rival GPT-4o across speech, vision, and text. Available now on Azure, Hugging Face, and NVIDIA’s API Catalog.
Inception debuts with a new type of AI model: Startup Inception has unveiled diffusion-based language models (DLMs), claiming up to 10x faster generation at lower cost than traditional autoregressive LLMs.
Pika releases Pika 2.2: You can now generate 10-second videos in 1080p and add Pikaframes, keyframe transitions anywhere from 1 to 10 seconds.
POPULAR AI TOOLS
Basalt: Integrate AI into your product in seconds.
OpenArt Consistent Characters: Craft your characters and stories with ease.
Claude Code and Claude 3.7 Sonnet: Anthropic’s agentic coding tool and its most intelligent model to date.
HabitGo: Build better habits for a happier, more productive you.
Pinch: Immersive real-time voice translation for video conferencing.
AND THAT’S A WRAP
Thank you for reading!
If you found this email useful, share it with a friend or colleague who also loves AI.
Also, drop me a follow on Twitter/X for more AI and tech updates.
I will talk to you soon!
Mike