
The Tarot Reveals:
Morality > Money
For nearly two decades, I have built and managed a successful digital media agency, crafting immersive digital experiences for some of the world’s top entertainment and e-commerce brands. From the dawn of Web 2.0 to the rise of Web 3.0, I have witnessed—and at times even predicted—the transformative shifts in technology, social behavior, and commerce. I have helped companies build their online presence, optimize their digital strategies, and, yes, harness the power of social media to engage, convert, and retain audiences. But today, I find myself at an ethical crossroads, one that forces me to step back and reevaluate my role in this ecosystem.
A Wild Frontier
In the early days of social media, it felt like a game—an exciting puzzle to solve. We were digital architects, figuring out how the platforms worked, tweaking headlines and hashtags, A/B testing content, reverse-engineering engagement tactics, and riding the waves of new algorithm changes before they were widely understood. There was a thrill in knowing how to hack growth, how to make content go viral, how to bend the rules of these emerging platforms to our advantage. Whether it was boosting organic reach with clever timing, exploiting loopholes in ad bidding, or cracking the code of what made a post sticky, it felt like we were pioneers of a new digital frontier.
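For anyone who never ran one of these experiments, the mechanics were surprisingly mundane. Here is a minimal sketch of the kind of headline A/B test I am talking about, written in Python with hypothetical click counts; it illustrates the technique, not our actual tooling.

```python
# A minimal sketch of a headline A/B test: did variant B really out-click variant A?
# The click and view counts below are hypothetical, chosen only for illustration.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Compare two click-through rates; return the z-score and two-sided p-value."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical results: headline B earns a higher click-through rate than headline A.
z, p = two_proportion_z_test(clicks_a=120, views_a=10_000, clicks_b=168, views_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value meant B "won" and shipped everywhere
```

Multiply that little loop by thousands of posts and campaigns and you get the growth-hacking culture I am describing.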
At the time, it all seemed harmless, even exhilarating—a kind of alchemy where data and creativity converged. Beyond serving our clients, we weren't questioning who really benefited from our ability to drive engagement, or what would happen when the algorithms evolved beyond our control. But over time, it became clear that we weren't just playing a game; we were feeding a system that was becoming more sophisticated, more manipulative, and ultimately, more dangerous. Now, as I look at the trajectory from Web 2.0 to Web 3.0 and beyond, I realize that we weren't hacking the platforms—the platforms were hacking us.
Side-Effects May Include…
Technology has always been a double-edged sword. It can empower, connect, and innovate—but it can also manipulate, exploit, and control. The shift from Web 2.0 to Web 3.0 was supposed to decentralize power, giving individuals control over their data and interactions. Instead, what we have seen is a consolidation of power in the hands of a few tech giants – a dystopian evolution of the very tools we once believed would democratize information and opportunity. And now, as the framework of Web 4.0 is being defined, I am increasingly convinced that we are on the precipice of an era defined by surveillance, manipulation, and societal control.
As I witness the continued weaponization of data, the rise of far-right political alliances within the tech industry, and the blatant abandonment of fact-based discourse in favor of algorithmic echo chambers, I can no longer, in good conscience, participate in the machinery that fuels this ecosystem. That is why I have made the difficult decision to discontinue social media management services at my agency. This is not a decision driven by business strategy, but by moral imperative.
From Connection to Control
Social media was once heralded as a revolutionary tool for connection and community. Platforms like Facebook, Twitter (now X), and Instagram promised to bring people together, give voices to the marginalized, and enable free expression. But over time, these platforms have evolved into something far more insidious.
The digital world has long been shifting toward a model where user behavior is not just observed but actively shaped. The Facebook-Cambridge Analytica scandal serves as a prime example of this evolution. In 2018, it was revealed that Cambridge Analytica, a political consulting firm, had harvested personal data from millions of Facebook users without their consent, using it to create psychographic profiles and micro-target individuals with personalized political ads. This operation allegedly influenced key elections, including the 2016 U.S. presidential race and the Brexit referendum. (Source: The Guardian). Facebook, now Meta, was fined $5 billion by the Federal Trade Commission in 2019, but the underlying issue remained: our data is a currency, traded and exploited with little oversight. (Source: FTC)
The Facebook-Cambridge Analytica scandal underscored how data collection extends beyond passive surveillance into behavioral manipulation. Tech companies and political organizations increasingly rely on AI-driven profiling to predict, influence, and direct human actions—whether in consumer choices, voting behavior, or ideological leanings. Today, platforms use algorithmic curation to filter content and algorithmic radicalization to amplify certain narratives while suppressing others, subtly nudging users toward specific viewpoints or engagement patterns. Reports from both Harvard Business Review and Vox's Recode highlight how algorithm-driven content delivery often leads to the reinforcement of biases and echo chambers, which can be exploited for political and commercial gain. (Source: Vox).
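To make that mechanism concrete, here is a deliberately simplified, hypothetical model of engagement-optimized ranking (not any platform's actual code). It shows why a feed that maximizes predicted engagement tends to keep serving a user more of whatever they already click on.

```python
# A toy, hypothetical model of engagement-optimized feed ranking.
# It is not any platform's real algorithm; it only illustrates the incentive:
# maximizing predicted engagement narrows what a user is shown.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str           # e.g. a viewpoint or interest category
    base_quality: float  # how broadly appealing the post is on its own

def predicted_engagement(post: Post, click_history: dict) -> float:
    """Score a post higher the more often this user has clicked its topic before."""
    affinity = click_history.get(post.topic, 0)
    return post.base_quality * (1 + affinity)

def rank_feed(posts: list, click_history: dict) -> list:
    """Order the feed by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: predicted_engagement(p, click_history), reverse=True)

# A user who has mostly clicked one viewpoint...
history = {"viewpoint_a": 40, "viewpoint_b": 3}
posts = [Post("viewpoint_a", 0.5), Post("viewpoint_b", 0.9), Post("viewpoint_a", 0.7)]

# ...sees that viewpoint ranked first even when the other post is "better" on its own,
# and every additional click deepens the skew. The echo chamber is the optimum.
for post in rank_feed(posts, history):
    print(post.topic, round(predicted_engagement(post, history), 1))
```

Nothing in that sketch is malicious line by line; the harm emerges from what the objective function rewards.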
As AI, biometric tracking, predictive analytics, and sentiment analysis continue to evolve, the mechanisms of influence will only become more sophisticated. These methodologies have turned users into psychological case studies, where every scroll, click, and hesitation is mapped and used to refine targeted messaging designed to influence opinions and behaviors. This is not just advertising; it is psychological engineering.
Rightward Tech
Adding to my concerns is the increasing alignment of major tech leaders with far-right political movements. In recent years, the tech industry has undergone a noticeable rightward shift, with influential figures embracing authoritarian-leaning rhetoric, conspiracy theories, and initiatives that blur the lines between corporate power and government control.
Of course, the most concerning example is Elon Musk's involvement with the Department of Government Efficiency (DOGE) and his endorsement of far-right parties in Europe (Source: MSNBC), but Jeff Bezos' interactions with convicted felon and current president, Trump, have also raised concerns about the political leanings of The Washington Post and Amazon's business dealings. (Source: WSJ) Peter Thiel, one of the earliest investors in Facebook, has been a vocal supporter of nationalist policies and has funneled money into political causes that promote authoritarian governance. (Source: The Times of India) And Mark Zuckerberg himself moved to eliminate third-party fact-checking across Meta's platforms. (Source: AP News)
This shift is not just about individual political beliefs; it has significant implications for the future of technology. When those who control the platforms that mediate public discourse align themselves with ideologies that reject democracy, fact-checking, and objective journalism, we must ask ourselves: What kind of digital world are we building? What happens when these platforms actively suppress dissenting voices while amplifying misinformation?
The tech industry’s embrace of authoritarian figures and corporate control over public policy represents a dangerous trend. The rise of billionaire-backed misinformation, combined with AI-powered propaganda tools, has made it increasingly difficult to separate fact from fiction. As the next generation of internet technologies approaches, the question is not whether big tech is leaning right, but whether it is actively shaping a future where truth is negotiable, and power is concentrated in the hands of a select few.
Web 4.0: The Dawn of Digital Suppression
The horizon I see ahead is Web 4.0: a world where artificial intelligence, blockchain, IoT, and biometric tracking converge to redefine digital interactions. Unlike previous iterations of the web, which expanded access and user empowerment, Web 4.0 seems poised to prioritize tracking, suppression, and control.
Governments and corporations are already exploring AI-driven censorship models that detect and remove content deemed “problematic” in real time. In China, the government has deployed AI-powered surveillance that not only censors content but also predicts and suppresses potential dissent before it occurs. (Source: The New York Times). In the U.S., social media platforms have been scrutinized for their role in the proliferation of information disorder, with AI systems deciding what information remains visible and what gets suppressed. (Source: National Library of Medicine).
Predictive policing technologies, fueled by big data, are being deployed in cities worldwide, disproportionately targeting marginalized communities. Studies have shown that predictive policing algorithms reinforce racial biases, leading to over-policing in historically disadvantaged neighborhoods. (Source: MIT Technology Review).
Meanwhile, the rise of smart devices, from wearable health trackers to AI-powered home assistants, has turned daily life into a continuous data-harvesting operation. Apple’s latest Vision Pro headset, for example, collects biometric and environmental data, prompting concerns about how this sensitive information could be used for profiling. (Source: Forbes). The proliferation of AI-powered surveillance tools in workplaces is another growing concern, with companies using these technologies to monitor employees’ emotions and productivity levels. (Source: The Guardian).
These examples illustrate that Web 4.0 is shaping up to be less about user empowerment and more about a fine-tuned system of behavioral tracking and control. If we continue down this path without regulation or ethical intervention, we risk normalizing a future where privacy, autonomy, and free expression are sacrificed for efficiency, profit, and power.
The Move Away from Social Media
For years, I have justified my participation in the social media ecosystem as a necessary business strategy. I remain convinced that our agency’s work—helping brands engage authentically and transparently—differs from the manipulative tactics employed by bad actors. But the reality is that by continuing to participate in a system this fundamentally broken, we remain complicit in its harms.
Walking away from social media management services is not a decision I take lightly. It means turning down lucrative contracts and shifting our business model. But it is a decision that aligns with my values and my responsibility as a digital leader.
Morality vs. Business (spoiler: Morality wins)
Every business owner, marketer, and technologist (…and individual!) must grapple with this ethical dilemma in their own way. Some will argue that abandoning social media is impractical or even counterproductive. Others will choose to stay and fight for a better, more ethical digital ecosystem from within. There is no singular right answer.
For me, however, the line has been crossed. I do not believe you can justify profiting from a system that is actively harming society. My hope is that by speaking out, I can encourage others to critically evaluate their own role and participation in this evolving landscape.
The next phase of the internet is being built right now. We have a choice: do we continue down the path of data exploitation, political manipulation, and digital suppression? Or do we take a stand, demand better, and build a future where technology serves humanity rather than controls it?
I have made my choice. What will yours be?