
The Two Faces of AI: Veejay Madhavan on Ethics, Explainability, and the Human Cost of Transformation



Veejay Madhavan’s (VJ) main concern about AI’s impact on the global workforce, and about corporate responsibility, isn’t whether the change is coming.

It’s how fast, and at what cost.

As founder of OulbyZ and a former CXO with decades of experience managing regional teams across Asia, Veejay has watched the corporate world absorb every wave of technological change, from outsourcing to automation to cloud transformation. 

But AI, he warns, is different. Not because of its capability, but because of the way it exposes a deeper institutional flaw: the tendency to rush toward tools while ignoring people.

“Everybody’s jumping on this bandwagon with AI,” he says. “The question now is not whether you have AI or not… If you haven’t adopted AI, then you’re already behind the curve in 2026. But the point is this: how do you integrate AI responsibly with full explainability?”

What’s getting lost in the rush is clarity. Not just about strategy, but about impact. “You should look back into the rearview mirror and say, did we train our people for all this? We knew it was going to come. And did we prepare our people? We didn’t.”

Why ‘explainability’ is the real AI crisis

In Veejay’s view, the biggest ethical blind spot in AI isn’t bias. It’s opacity.

“Everybody talks about integrating AI responsibly,” he says. “One big aspect that’s missing is explainability.”

Explainability, in this context, is about employees being able to understand how their performance is being evaluated—and whether the systems doing that evaluation are fair.

“If I’m an employee… how is it [AI] measuring my productivity? Screen time? Is it focusing on my task? Now when you’re doing that, there’s this issue of surveillance.”

And that, he adds, breeds mistrust, especially if AI is introduced without proper communication. “Are employees comfortable with an AI algorithm surveilling their work? What if I was not at my laptop but I was in a meeting? Is it looking into my calendar to see whether I attended that meeting?”

It’s not just a technical issue; it’s a psychological one. “You have to explain, you have to brief to people, you have to get people comfortable that this is not a policing tool,” he says. “This is an enabler.”
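To make that concrete, here is a minimal, purely hypothetical sketch in Python of what an “explainable” productivity metric could look like: a score that can show an employee exactly which signals were counted and how each was weighted, so that, for example, time spent in a calendar-confirmed meeting is not silently logged as idle screen time. The signal names and weights are invented for illustration and do not come from Veejay or any real monitoring product.

```python
# Hypothetical illustration only: a productivity score that can explain itself.
# The signals and weights below are invented for this sketch.

from dataclasses import dataclass

@dataclass
class ActivitySignals:
    focused_task_hours: float   # time spent on assigned tasks
    meeting_hours: float        # calendar-confirmed meetings
    idle_screen_hours: float    # time the screen tracker saw no activity

# Assumed weights; in a transparent system these would be published to employees.
WEIGHTS = {
    "focused_task_hours": 1.0,
    "meeting_hours": 0.8,       # meetings count as work, not as idle time
    "idle_screen_hours": -0.2,
}

def score_with_explanation(signals: ActivitySignals) -> tuple[float, dict]:
    """Return a score plus a per-signal breakdown the employee can inspect."""
    contributions = {
        name: getattr(signals, name) * weight
        for name, weight in WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, breakdown = score_with_explanation(
    ActivitySignals(focused_task_hours=5.0, meeting_hours=2.0, idle_screen_hours=1.0)
)
print(f"score={score:.1f}")
for signal, contribution in breakdown.items():
    print(f"  {signal}: {contribution:+.1f}")
```

The point of the sketch is the breakdown, not the numbers: an employee who can see the per-signal contributions can at least question them, which is the kind of transparency Veejay is arguing for.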

But explainability, he emphasizes, must be proactive, not reactive. “Even those people we need to transition out—how are we supporting them? We just say, okay, AI does this, we’ll throw this in, we’ll just clear this bunch of people out. Full stop. Pay them off. It is quite sad to see organizations treat people… without looking at the psychological aspect of things.”

Retraining, resistance, and responsibility

While most companies today talk about “reskilling” and “redeployment” as if they were logistical boxes to tick, Veejay is blunt: those terms are only meaningful when consent and capability are factored in.

“You can only be retrained if you are willing,” he says. “Because a lot of companies say we’ll retrain our staff and redeploy them. Right. But the question is, is the staff ready for retraining and redeployment? And is the staff the right staff for the redeployment into the new role?”

He doesn’t dismiss the intention behind these policies, but he questions the planning.

“If I took a call center agent and said, let’s retrain him with operational risk capabilities and put him as operational risk… or let’s retrain him as an AI ethics officer… but then he has no idea what AI is. And he’s not prepared to even do that job.”

In his view, responsible AI adoption must include career planning as a core pillar. “That needs to happen as AI is being brought into the business. You’ve got to sit down with people who are going to be likely affected and say, well, where do you want your career to go from here?”

This, he says, is a strategic duty, not just an HR one. “You may retrain for a new role in your organization, or you may support an external retraining so that that person can actually prepare to transition their life outside the organization. Because it’s impossible to keep all your staff.”

But the critical mistake? Treating retraining like a PR shield instead of a real solution. “We are running a business. We’re not running a charitable organization,” he says. “So there will be people who will be transitioned. But we’ve seen lessons—how people are transitioned just by ‘go off’ and just lay off like that without even any conversation. And that’s bad.”

Surveillance or support?

Beyond the fear of job loss, Veejay points to a subtler shift AI is already triggering, one that changes how authority and performance are interpreted inside companies.

He sees AI not as a replacement for managers, but as a tool that reveals their limitations.

“AI can now do… measure productivity, measure performance,” he explains. “It becomes a manager’s sidekick.”

But while AI excels at pattern recognition, it fails at something essential. “AI cannot do reasoning with emotion,” he says. “So that’s where the manager’s role comes in.”

This, he argues, demands a new set of skills from managers, ones they’re rarely trained for. “Do they know exactly how to craft this message back to the employee as feedback?” he asks. “There’s retraining that has to happen… even at the top level.”

He’s especially critical of how AI is rolled out in silos. “It’s always the back office operations, including HR, that gets all this AI first because they create less impact to the business,” he says. “But all the functions are interdependent. The other functions need to be retrained, too.”

In Veejay’s view, companies are still underestimating the cultural ripple effects of automation.

“We say, oh, okay, AI does this, we’ll just clear this bunch of people out… and that’s it. You pay out money, so you think that’s the end of it,” he says. “But these guys have given up family time. They worked on weekends. They’ve paid an implicit cost. And then the company dumps them like this.”

Gen Z sees through the noise

While companies often assume that Gen Z will be naturally aligned with AI, Veejay urges caution. Their familiarity with technology doesn’t mean they trust it (or the people implementing it).

“There’s this misconception that Gen Z is very good at AI,” he says. “Gen Zs know the power of AI, but they do not know how the AI algorithms work.”

What they do know, he adds, is when something feels off. “They’re big users… so they know what tools… the knowledge of AI,” he says. “But it doesn’t mean that you know how to set up AI.”

That disconnect creates an unexpected vulnerability. As AI enters their workplaces, Gen Zs aren’t equipped to lead those changes, nor are they given the clarity to understand them.

“They’ll be sitting there like, so what do you want me to do here? How is this going to change my career trajectory as a manager?” he says. “Am I going to get replaced?”

And unlike older generations, they won’t wait around to find out. VJ refers to this as the “Zoomer Bailout.”

“They bail out,” Veejay says. “The Zoomers are looking at this and they’re saying, wait a minute, I think I’d better bail now.”

“They’ve seen the dark side of it,” he says. “So if I don’t know how this AI is going to work, how do I know what meaning it’s going to bring to my work? And whether this is not a step to get rid of me?”

Leadership’s blind spot

For Veejay Madhavan, the most urgent problem isn’t AI itself. It’s leadership’s reluctance to talk about it clearly, early, and often.

“Leadership. Stop rushing into your things,” he says flatly. “If you were going to talk about your strategy, you should have started communicating your strategy with your workforce right now.”

Instead, many leaders wait until transformation becomes unavoidable, and by then, trust is already broken.

“We prepare a 12-month calendar,” he says. “That means work starts from day one. But we say, no, no, no—our people are so… they have to focus on year-end, and we’ll talk about it in January. Really?”

This disconnect, he asserts, leads to chaos that’s entirely preventable. “If I want to do this communication in January, I would be now working on my comms straight away,” he says. “I’ll be mastering and polishing my comms now in November. I will be dropping hints along the way.”

Without that kind of runway, organizations stumble out of the gate. “The finger pointing, the chaos, the noise, the fog appears in January,” he says. “No one has any idea.”

And Gen Z notices. “They are watching. If a senior person is treated like this, how about me, the young Gen Z who just entered this company? How would I be treated?”

A call for ethical AI integration

Veejay’s solution isn’t just better messaging. It’s structural: rethink AI integration as an exercise in clarity, capability, and consent.

“It all starts with what you’re planning to do in the strategy,” he says. “Who have we got to take us to where we want to go?”

He argues that most companies don’t actually know their people well enough to make those calls. “Nine out of 10 times, HR doesn’t know,” he says. “Because what we do in our people assessment is always surface level.”

To fix that, he’s developed two diagnostic tools that gauge clarity at both the organizational and the individual level. “One is looking in the organization and whether there’s alignment,” he says. “The other is looking at individual clarity.”

And together, those insights are actionable. “What have you got that is really good? What have you got that you really need to work on?” he explains. “If these two—organizational and individual clarity—don’t align, you cannot build your strategy. Simple.”

This goes deeper than workforce planning. It’s about respect.

“We have not done enough with people to say who do we want to lay off?” he says. “Could this engineer be redeployed somewhere else? It’s good talent and good IP loss.”

That knowledge, he believes, should be the starting point for any AI adoption. “Since you’re already doing a large painful exercise with AI, don’t forget all the interdependent elements to it,” he says. “Might as well do it one shot with everything, right?”

The two faces, and the one choice left

Throughout the conversation, Veejay keeps returning to the same theme: AI doesn’t expose a company’s technological readiness; it exposes its humanity.

“AI will bring two faces,” he says. “Like all automation and technology, there is a benefit, but there’s also this human element you have to deal with, which needs to be treated with care.”

It’s not enough to invest in systems. Leaders must invest in trust. That begins with how decisions are made and how clearly they’re explained.

“You cannot hide under the garb of bureaucracy,” he says. “Financial numbers take care of themselves when your processes and people are aligned.”

And above all, leadership must accept that real responsibility comes with emotional cost. “Sometimes the right thing is not the popular thing,” he says. “But we are far better than that.”

For Veejay, the question AI presents isn’t what companies will gain. It’s what—and whom—they are willing to lose.

 


