The Future of Shopping? AI + Actual Humans.
AI has changed how consumers shop by speeding up research. But one thing hasn’t changed: shoppers still trust people more than AI.
Levanta’s new Affiliate 3.0 Consumer Report reveals a major shift in how shoppers blend AI tools with human influence. Consumers use AI to explore options, but when it comes time to buy, they still turn to creators, communities, and real experiences to validate their decisions.
The data shows:
Only 10% of shoppers buy through AI-recommended links
87% discover products through creators, blogs, or communities they trust
Human sources such as reviews and creators are trusted more than AI recommendations
The most effective brands are combining AI discovery with authentic human influence to drive measurable conversions.
Affiliate marketing isn’t being replaced by AI; it’s being amplified by it.
How new Xe driver enhancements unlock better performance and scalability for AI workloads
Linux continues to be the backbone of modern AI infrastructure, and Intel is doubling down on that reality. With Linux kernel 7.0, Intel has introduced a major upgrade to its Xe GPU driver: Multi-GPU Shared Virtual Memory (SVM).
This change may sound technical, but its impact on AI workloads, performance, and developer productivity is significant.
Shared Virtual Memory allows CPUs and GPUs to access the same memory space without explicit copying; a short sketch of what this looks like from application code follows the list below. When extended across multiple GPUs, SVM enables:
Seamless memory sharing across GPUs
Reduced data-copy overhead
Simpler programming models
Faster execution for data-intensive workloads
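To make that concrete, here is a minimal sketch of the shared-memory model from the application side, written in SYCL/C++ (the model Intel’s oneAPI stack builds on). It uses only standard SYCL 2020 unified shared memory calls and is illustrative of the programming model, not an Xe-driver-specific interface.

```cpp
// Minimal sketch of shared virtual memory from the application side,
// assuming a SYCL 2020 compiler such as oneAPI DPC++ (icpx -fsycl).
// Illustrative only: standard SYCL USM, not an Xe-specific API.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q{sycl::gpu_selector_v};

    constexpr size_t n = 1 << 20;
    // One allocation, one pointer, valid on both host and device.
    float* data = sycl::malloc_shared<float>(n, q);

    for (size_t i = 0; i < n; ++i)
        data[i] = 1.0f;                           // CPU writes directly

    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] *= 2.0f;                          // GPU updates the same memory
    }).wait();

    std::cout << "data[0] = " << data[0] << '\n'; // CPU reads the result back
    sycl::free(data, q);
}
```

The notable absence here is any memcpy or explicit transfer call: the same pointer is valid on the host and the device, and the driver handles residency and migration behind the scenes.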
For AI and machine learning, where massive datasets are constantly moved between accelerators, this is a big deal.
Why this matters for AI on Linux
AI workloads are increasingly multi-GPU by design, especially for training large models. Without efficient memory sharing, performance suffers; a sketch of the resulting programming model follows the list below.
With Multi-GPU SVM:
Models scale more efficiently across GPUs
Memory bottlenecks are reduced
Latency drops for large tensor operations
Developers spend less time managing memory manually
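Extending the earlier example across devices, the hedged sketch below (again standard SYCL 2020, assuming a machine with at least two SYCL-visible GPUs) places two devices in one context and lets each work on half of a single shared allocation. Whether one allocation is actually visible to both devices depends on hardware and driver support, and cross-device visibility is precisely what the Multi-GPU SVM work targets.

```cpp
// Hedged sketch: two GPUs in one SYCL context sharing a single allocation.
// Assumes at least two SYCL-visible GPUs; cross-device visibility of the
// allocation depends on driver and hardware support.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    auto gpus = sycl::device::get_devices(sycl::info::device_type::gpu);
    if (gpus.size() < 2) return 0;               // the demo needs two GPUs

    sycl::context ctx{std::vector<sycl::device>{gpus[0], gpus[1]}};
    sycl::queue q0{ctx, gpus[0]};
    sycl::queue q1{ctx, gpus[1]};

    constexpr size_t n = 1 << 20;
    float* data = sycl::malloc_shared<float>(n, gpus[0], ctx);

    // Each GPU touches half of the same allocation through the same
    // pointer; the driver migrates or maps pages instead of the
    // application staging explicit copies between devices.
    auto e0 = q0.parallel_for(sycl::range<1>{n / 2},
                              [=](sycl::id<1> i) { data[i] = 1.0f; });
    auto e1 = q1.parallel_for(sycl::range<1>{n / 2},
                              [=](sycl::id<1> i) { data[n / 2 + i] = 2.0f; });
    e0.wait();
    e1.wait();

    sycl::free(data, ctx);
}
```

Note that there is no device-to-device copy anywhere in the sketch. Coordinating page migration and mapping across GPUs is the driver’s job, which is why this capability lands at the kernel level rather than in application code.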
This brings Intel’s Linux GPU stack closer to the needs of modern AI pipelines.
What’s new in the Intel Xe driver
The updated Xe driver in Linux kernel 7.0 introduces:
Improved GPU-to-GPU memory access
Better coordination between CPU and multiple GPUs
Foundations for more advanced AI and HPC workloads
Stronger alignment with open-source accelerator frameworks
These improvements help position Intel GPUs as more competitive options for AI workloads on Linux. Applications can check for the relevant capabilities at runtime, as sketched below.
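From the application side, a reasonable pattern is to probe device capabilities before opting into an SVM-style code path. The sketch below uses standard SYCL 2020 device aspects; which aspects a given Xe device reports will depend on the kernel and driver build in use.

```cpp
// Hedged sketch: probing USM capabilities before choosing an SVM code path.
// The aspect names are standard SYCL 2020; what a given Xe device reports
// depends on the kernel and driver build.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    std::cout << std::boolalpha;
    for (const auto& dev :
         sycl::device::get_devices(sycl::info::device_type::gpu)) {
        std::cout << dev.get_info<sycl::info::device::name>() << '\n'
                  << "  shared USM: "
                  << dev.has(sycl::aspect::usm_shared_allocations) << '\n'
                  << "  system USM: "
                  << dev.has(sycl::aspect::usm_system_allocations) << '\n';
    }
}
```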
Why open source makes this important
Unlike development on proprietary stacks, Intel’s Linux GPU work happens largely in the open.
This means:
Faster community feedback
Easier integration with AI frameworks
Greater transparency for developers
Long-term stability for enterprise deployments
For Linux users, these kernel-level improvements arrive without vendor lock-in.
What this means for developers and sysadmins
For developers:
Simpler multi-GPU programming
Better performance with fewer code changes
More efficient AI training and inference
For system administrators:
Improved scalability on Intel GPU hardware
Better utilization of multi-GPU servers
Stronger Linux-native AI infrastructure
This is especially relevant for organizations exploring alternatives in the rapidly evolving AI hardware landscape.
Final Thoughts
Intel’s addition of Multi-GPU SVM to the Xe driver in Linux kernel 7.0 is a clear signal:
Linux-based AI is becoming more open, more scalable, and more hardware-agnostic.
As AI workloads continue to grow in size and complexity, kernel-level innovations like this will quietly power the next generation of performance gains—without changing how developers work.

