
Samarth Wadhwa, Product Leader — Operationalizing Generative AI, Scaling Cloud-Native Platforms, Driving Enterprise Automation, Democratizing Data, Empowering Cross-Functional Innovation

In this compelling interview, we sit down with Samarth Wadhwa, a forward-thinking product leader at NetApp who is redefining the future of enterprise technology at the intersection of AI, DevOps, and cloud operations. From launching AISA, a Generative AI-powered support assistant, to pioneering Text-to-SQL capabilities that democratize data access, Samarth’s work exemplifies how AI can drive real, measurable impact across the organization. With deep experience in both startup and Fortune 50 environments, he brings a rare blend of technical acuity and strategic clarity, showing how to scale innovation without sacrificing operational efficiency. His insights offer a blueprint for how today’s enterprises can turn AI from a buzzword into a business-critical advantage.


How do you approach taking a product from ideation to mass adoption, and what are the key challenges at each stage?

When taking a product from ideation to mass adoption, my approach is grounded in deeply understanding the problem space before rushing to solutions. I start by spending time with customers, sales teams, and support to uncover real pain points, what’s broken, what’s inefficient, and what users are trying to do but can’t. That early phase is all about pattern recognition: are we seeing the same friction across different customer segments? If yes, that’s when I start validating the opportunity with qualitative and quantitative signals.

Once I have a clear problem definition, I move into solution design. At NetApp, for example, we developed an API-first integration platform from scratch. The key was not just building for today’s needs but designing for extensibility, anticipating how customers will evolve, and making sure we don’t box ourselves in. We worked closely with engineering to break the vision into milestones, iterating fast and aligning our internal teams around value delivery, not just feature output. That’s where good PRDs and roadmap clarity matter.

Bringing the product to market is another critical phase. It’s not enough to build a great product; you need to make it easy for others to adopt it. That involves enabling sales, refining messaging, and often simplifying packaging. At Harness, for instance, we built GitOps as a Service from 0-to-1 and worked hand-in-hand with marketing, customer success, and even analysts to craft the right narrative. We also implemented a monthly release cadence, which helped us build customer trust and ship faster.

The final leg, scaling and mass adoption, is where a lot of companies stumble. You have to operationalize feedback loops and obsess over metrics. How are users interacting with the product? What’s the time to value? At NetApp, we launched a Generative AI assistant, AISA, and we tracked how much it reduced time-to-resolution and how it impacted CSAT scores. That data helped us not only iterate but also prove ROI internally and to customers. And frankly, some of the best product ideas come from watching how customers misuse or extend your product; that’s a signal they want more.
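As a rough illustration of that kind of measurement loop, here is a minimal Python sketch of how before-and-after time-to-resolution and CSAT could be compared from a support-ticket export. The file name and column layout (opened_at, resolved_at, csat, cohort) are illustrative assumptions, not AISA’s actual telemetry schema.

```python
# Minimal sketch: comparing time-to-resolution and CSAT before and after an
# AI-assistant rollout. Field names and file layout are hypothetical.
import csv
from datetime import datetime
from statistics import mean

def load_tickets(path):
    """Read support tickets from a CSV export with assumed columns:
    opened_at, resolved_at, csat (1-5), cohort ('pre' or 'post')."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def hours_to_resolve(ticket):
    opened = datetime.fromisoformat(ticket["opened_at"])
    resolved = datetime.fromisoformat(ticket["resolved_at"])
    return (resolved - opened).total_seconds() / 3600

def summarize(tickets, cohort):
    subset = [t for t in tickets if t["cohort"] == cohort]
    return {
        "avg_ttr_hours": mean(hours_to_resolve(t) for t in subset),
        "avg_csat": mean(float(t["csat"]) for t in subset),
    }

tickets = load_tickets("tickets.csv")
pre, post = summarize(tickets, "pre"), summarize(tickets, "post")
print(f"Time-to-resolution change: {post['avg_ttr_hours'] - pre['avg_ttr_hours']:.1f} h")
print(f"CSAT change: {post['avg_csat'] - pre['avg_csat']:.2f}")
```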

At every stage, there are challenges: getting alignment early on, balancing scope and speed during build, standing out in a noisy market during launch, and evolving quickly enough during scale. But if you’re listening closely to your users and staying focused on business outcomes, those challenges become opportunities to deepen product-market fit. That’s the mindset I bring to every product I’ve built.

What are the biggest opportunities and risks for enterprises as AI and automation continue to reshape industries?

AI and automation are fundamentally changing how enterprises operate, and I see both immense opportunity and some serious risks in that transformation. On the opportunity side, AI is creating real value in places that were historically bottlenecks—support, analytics, and decision-making, to name a few. For example, at NetApp, I led the development of AISA, a generative AI-powered support assistant that significantly reduced time-to-resolution for customer queries. That’s not just a support win, it’s a revenue enabler, because quicker resolution builds trust and improves retention. We also worked on a Text-to-SQL solution, which essentially democratized access to data by allowing non-technical users to generate complex queries using plain English. That kind of capability transforms how business users make decisions.
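To make the Text-to-SQL idea concrete, the sketch below shows one common shape of the pattern: the database schema is injected into a prompt and the model is asked to return SQL only, with a basic read-only guardrail before anything is executed. The call_llm helper and the sample tables are placeholders, not the actual NetApp implementation.

```python
# Illustrative Text-to-SQL pattern: schema-grounded prompt, model returns SQL.
# `call_llm` is a placeholder for whatever completion API is in use.

SCHEMA = """
tables:
  orders(order_id, customer_id, amount_usd, created_at)
  customers(customer_id, name, segment, region)
"""

PROMPT_TEMPLATE = """You are a SQL assistant. Using only this schema:
{schema}
Write a single ANSI SQL query answering: "{question}"
Return only the SQL, no explanation."""

def call_llm(prompt: str) -> str:
    """Placeholder for a completion call (e.g., a hosted LLM endpoint)."""
    raise NotImplementedError("wire this to your model provider")

def text_to_sql(question: str) -> str:
    prompt = PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)
    sql = call_llm(prompt).strip()
    # Basic guardrail: allow read-only queries only before execution.
    if not sql.lower().startswith("select"):
        raise ValueError("Generated query is not a read-only SELECT")
    return sql

# Example usage: text_to_sql("Total revenue by region last quarter")
```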

But the biggest opportunity, in my view, is around efficiency at scale. Automation removes repetitive overhead and frees up teams to focus on innovation. At Harness, we did this with GitOps and Continuous Delivery automation, helping companies streamline their software delivery process. When done right, AI and automation become strategic levers, not just cost savers, but value creators.

Now, on the risk side, one of the biggest challenges is over-automation without understanding context. Just because you can automate something doesn’t mean you should. Poorly implemented AI can degrade user experience or lead to critical errors, especially in regulated industries. There’s also the risk of bias in AI models, and enterprises need strong governance frameworks to manage that responsibly. I’ve seen that firsthand while integrating LLMs and RAG pipelines into enterprise platforms; data privacy, model hallucination, and explainability are real concerns.
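As a generic illustration of how those risks are often mitigated in a RAG pipeline (this is a sketch of a common pattern, not the AISA design), retrieval confidence can gate whether the model answers at all, and the retrieved sources can travel with every response for explainability.

```python
# Generic RAG guardrail sketch: only answer when retrieval is confident,
# and always return the supporting sources. The retriever and generator are
# placeholders for whatever vector store and LLM are actually in use.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str
    score: float  # similarity score from the vector store, 0..1

MIN_SCORE = 0.75  # below this, refuse rather than risk a hallucination

def retrieve(query: str) -> list[Passage]:
    raise NotImplementedError("query your vector database here")

def generate(query: str, context: list[Passage]) -> str:
    raise NotImplementedError("call your LLM with query + retrieved context here")

def answer(query: str) -> dict:
    passages = [p for p in retrieve(query) if p.score >= MIN_SCORE]
    if not passages:
        return {"answer": None, "reason": "insufficient grounding", "sources": []}
    return {
        "answer": generate(query, passages),
        "sources": [p.source for p in passages],  # kept for explainability/audit
    }
```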

Another risk is internal resistance. AI and automation often imply job displacement, and unless leaders are transparent and intentional about reskilling and communication, it can trigger cultural backlash. The key is not just deploying AI for productivity, but doing it in a way that augments human potential instead of replacing it.

Ultimately, I think the winners in this space will be the enterprises that treat AI not as a feature, but as a foundation. That means building with trust, aligning use cases with measurable outcomes, and embedding AI deeply into both product and process. The potential is massive, but only if approached with both ambition and responsibility.

As a Product Leader at NetApp, how do you align multidisciplinary teams to drive complex projects forward effectively?

Driving complex projects forward really comes down to clarity, context, and continuous communication. At NetApp, I’ve led several high-impact initiatives, like the development of our cloud-native iPaaS platform and AISA, our Generative AI support assistant—and none of those would have been successful without tight alignment across engineering, sales, support, and marketing.

The first thing I focus on is establishing a shared understanding of why we’re building something. It’s easy for teams to get siloed into execution mode, but when you take the time to connect the dots between customer pain points, business goals, and the product vision, alignment becomes much easier. I typically kick off initiatives with a narrative that explains the opportunity, the stakes, and the customer impact, not just a feature list. That narrative helps every team, from developers to customer success, understand their role in delivering value.

From there, it’s about breaking the vision down into actionable milestones and creating the right collaboration rhythms. I rely heavily on agile practices, but with a strong emphasis on cross-functional checkpoints. For instance, during the rollout of AISA, we held weekly triage sessions with engineering, UX, and customer support, so we could iterate quickly based on real user feedback. I also established a New Product Introduction checklist across CloudOps to ensure consistency in delivery and readiness across functions: sales enablement, support documentation, go-to-market, and more.
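For illustration, a readiness gate like the NPI checklist can be as simple as a set of cross-functional sign-offs that block launch until every gap is closed. The items below are hypothetical examples, not the actual CloudOps checklist.

```python
# Hypothetical sketch of a New Product Introduction readiness gate: each
# function signs off before launch, and the remaining gaps are reported.
NPI_CHECKLIST = {
    "sales_enablement": False,
    "support_documentation": False,
    "go_to_market_plan": False,
    "pricing_and_packaging": False,
    "observability_dashboards": False,
}

def readiness_report(checklist: dict) -> tuple[bool, list[str]]:
    gaps = [item for item, done in checklist.items() if not done]
    return (len(gaps) == 0, gaps)

ready, gaps = readiness_report(NPI_CHECKLIST)
print("Ready to launch" if ready else f"Blocked on: {', '.join(gaps)}")
```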

Of course, misalignment can still creep in, especially in global teams with competing priorities. That’s where I find transparency and data are key. I openly share customer insights, adoption metrics, and usage patterns with all stakeholders, so the decision-making is grounded in facts, not opinions. And I make sure to celebrate progress visibly, highlighting team wins, recognizing contributions across disciplines—because momentum is as much emotional as it is operational.

Ultimately, aligning multidisciplinary teams is about making everyone feel invested in the outcome. If people understand the purpose, see the progress, and feel ownership, they show up differently, and that’s what drives complex initiatives across the finish line.

How do you see the fields of CloudOps and DevOps evolving in the next five years, and what skills will be most critical for professionals?

CloudOps and DevOps are both evolving rapidly, and I believe the next five years will bring a shift from infrastructure-centric operations to intelligence-driven, autonomous systems. We’re already seeing that trend with the adoption of GitOps, infrastructure-as-code, and policy-as-code becoming table stakes, but what’s coming next is even more transformative.

CloudOps, in particular, is moving toward greater abstraction. As multi-cloud environments become more complex, enterprises won’t want to manage individual services or vendors—they’ll want unified operational layers that are API-first, policy-driven, and AI-augmented. That’s what we focused on at NetApp when building our iPaaS platform and AI-powered support systems. It’s about giving teams a way to manage complexity without getting buried in it.

DevOps is also evolving—from pipelines and automation scripts to platform engineering and developer experience. We saw that at Harness with GitOps as a Service, where customers wanted not just tooling, but end-to-end, opinionated workflows that could scale securely across large organizations. The future of DevOps lies in building reusable, scalable platforms that empower developers without making ops teams a bottleneck.
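For readers less familiar with the model, the core GitOps idea, declared desired state continuously reconciled against observed state, can be sketched in a few lines. The functions below are placeholders for a real Git reader, cluster query, and apply step.

```python
# Minimal sketch of the GitOps reconcile loop: desired state is declared in
# Git, observed state comes from the running environment, and the controller
# converges one toward the other. All three helpers are placeholders.
import time

def desired_state() -> dict:
    """Read the declared configuration from the Git repository."""
    raise NotImplementedError

def observed_state() -> dict:
    """Query the cluster/environment for what is actually running."""
    raise NotImplementedError

def apply(diff: dict) -> None:
    """Push the environment toward the declared configuration."""
    raise NotImplementedError

def reconcile_forever(interval_seconds: int = 60) -> None:
    while True:
        desired, observed = desired_state(), observed_state()
        diff = {k: v for k, v in desired.items() if observed.get(k) != v}
        if diff:
            apply(diff)  # converge; drift is corrected automatically
        time.sleep(interval_seconds)
```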

In terms of skills, I think adaptability will be more important than any specific tool. That said, professionals should invest in three key areas: First, a solid grasp of cloud-native technologies: Kubernetes, Terraform, and CI/CD frameworks. Second, a strong foundation in automation, including GitOps principles and observability tooling. And third, and maybe most importantly, data literacy and AI integration skills. Whether it’s working with LLMs, building telemetry pipelines, or optimizing models for operations, AI is becoming embedded in the DevOps workflow.

Soft skills also can’t be overlooked. The ability to collaborate across teams, understand product and customer context, and communicate trade-offs will differentiate great engineers from good ones. As the lines blur between DevOps, CloudOps, and AIOps, it’ll be the professionals who can move fluidly across domains—and think in systems, not silos—who will lead the next wave of innovation.

What strategies have you found most effective in scaling operational efficiency while maintaining innovation within an organization?

One of the biggest misconceptions I’ve seen is that operational efficiency and innovation are at odds with each other. In reality, if you’re thoughtful, they can reinforce one another. At both NetApp and Harness, I’ve found that the most effective strategy is to design systems and processes that reduce friction, so teams can focus more on solving real problems and less on navigating internal complexity.

For example, at Harness, we implemented a structured monthly release cycle across our Continuous Delivery teams. That shift alone significantly improved velocity and predictability, which freed up engineering time that had previously been consumed by context switching and fire drills. By streamlining release management, we created more room for experimentation because teams weren’t constantly reacting; they could plan, prototype, and iterate.

At NetApp, we took a similar approach with our CloudOps initiatives. I introduced a New Product Introduction checklist that aligned engineering, product, sales, and support on what a “complete” product delivery looked like. That standardization didn’t slow us down—it accelerated go-to-market because there were fewer surprises late in the process. Everyone knew what was expected and could focus their energy on delivering value rather than cleaning up misalignment.

Another important lever is automation. I’m a big believer in automating anything repeatable but not value-differentiating. Whether it’s setting up infrastructure with Terraform or streamlining support workflows using AI tools like AISA, the goal is to eliminate toil so teams can spend more time on strategic work. And automation doesn’t just improve efficiency; it improves morale, because people feel like they’re doing meaningful work, not just chasing tickets.

Finally, I think a culture of data-driven prioritization is key. Innovation often fails not because the ideas are bad, but because the timing or focus is off. At both companies, I helped implement frameworks to evaluate features based on ROI, customer impact, and effort, so we could place smart bets without losing momentum on core initiatives. That balance is what keeps the engine running while still pushing boundaries.
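A lightweight version of that kind of prioritization framework can be sketched as a simple value-per-effort score. The weights and sample features below are made up for illustration and are not the framework used at either company.

```python
# Illustrative prioritization sketch: score features by ROI and customer
# impact relative to effort. Sample data and weights are hypothetical.
features = [
    {"name": "Text-to-SQL beta", "roi": 8, "customer_impact": 9, "effort": 5},
    {"name": "Dashboard refresh", "roi": 4, "customer_impact": 6, "effort": 3},
    {"name": "Legacy API cleanup", "roi": 6, "customer_impact": 3, "effort": 8},
]

def score(feature: dict, w_roi: float = 0.5, w_impact: float = 0.5) -> float:
    value = w_roi * feature["roi"] + w_impact * feature["customer_impact"]
    return value / feature["effort"]  # value per unit of effort

# Rank the backlog candidates from best to worst value-per-effort.
for f in sorted(features, key=score, reverse=True):
    print(f"{f['name']}: {score(f):.2f}")
```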

In the end, the real strategy is making sure operational improvements don’t feel like constraints, but enablers. When teams see that efficiency gives them more freedom to innovate, not less, that’s when the flywheel starts to turn.

How is AI transforming product management, and do you see a future where AI-driven insights replace key decision-making processes?

AI is fundamentally reshaping product management, both in how we build products and how we make decisions about them. We’re already seeing a shift from intuition-driven roadmaps to insight-driven ones, where decisions are increasingly grounded in user behavior, telemetry, and predictive analytics. At NetApp, for example, when we built AISA, our AI-powered support assistant, we didn’t just apply AI to the customer experience—we also used data from it to inform future product decisions. Things like the most frequently asked queries, time-to-resolution metrics, and escalation patterns helped us prioritize backlog items and design better self-service flows.
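As a hypothetical example of turning assistant data into roadmap input, the sketch below ranks query intents by frequency and escalation rate; high-frequency, high-escalation intents usually point at self-service gaps worth prioritizing. The log shape and intent names are invented for illustration.

```python
# Hypothetical sketch: mining assistant logs for the most frequent query
# intents and their escalation rates, as one input to backlog prioritization.
from collections import Counter, defaultdict

# Each log entry: (intent, escalated_to_human). Illustrative shape only.
logs = [
    ("reset_password", False), ("quota_increase", True),
    ("reset_password", False), ("billing_question", True),
    ("quota_increase", True), ("reset_password", True),
]

frequency = Counter(intent for intent, _ in logs)
escalations = defaultdict(int)
for intent, escalated in logs:
    escalations[intent] += int(escalated)

for intent, count in frequency.most_common():
    rate = escalations[intent] / count
    print(f"{intent}: {count} queries, {rate:.0%} escalated")
```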

But beyond analytics, AI is becoming a collaborator. With the rise of LLMs and tools like Text-to-SQL, which we’re building to empower non-technical users, the role of the PM is evolving. We’re no longer the only bridge between business and engineering; AI can now translate natural language into technical outputs. That forces us, as product managers, to move up the value chain to focus more on framing the right problems and orchestrating across functions, rather than just translating requirements.

Now, will AI replace key decision-making? I don’t think so, at least not entirely. What I do see is a shift toward AI-augmented decision-making. The best PMs will be the ones who ask better questions of the data, understand the context behind the numbers, and apply judgment where the models fall short. For example, AI might tell you that users are dropping off after step three in a workflow, but it won’t tell you why that step feels frustrating or misaligned with user expectations. That still requires human empathy and qualitative insight.

That said, I do think AI will increasingly own decisions in well-bounded, data-rich environments: pricing optimization, A/B test evaluations, or incident response. And that’s a good thing. It frees up mental bandwidth for strategic thinking, user empathy, and long-term vision, areas where human product leaders add the most value.

So in my view, the future of product management isn’t about being replaced by AI, it’s about becoming more effective by learning how to lead alongside it.

Based on your experience with Fortune 50 companies, what common mistakes do large enterprises make when adopting new technologies?

One of the most common mistakes I’ve seen Fortune 50 companies make when adopting new technologies is jumping straight into implementation without clearly defining the problem they’re trying to solve. There’s often a rush to “check the box” on adopting the latest trend, whether it’s AI, DevOps, or cloud-native architectures, without a strong alignment between business goals and technical strategy. I’ve been in conversations where the focus was more on deploying Kubernetes or integrating a new LLM rather than asking, “How does this move the needle for our customers or our teams?”

Another big pitfall is underestimating the complexity of change management. Technology adoption isn’t just about installing new tools; it’s about evolving culture, processes, and people. At NetApp and Harness, when we introduced automation or AI-based solutions, we always paired them with enablement plans, cross-functional alignment, and stakeholder buy-in. In large enterprises especially, if you don’t bring teams along for the journey (engineering, operations, security, even finance), you end up with shelfware or shadow IT. I’ve seen that happen more than once.

A third mistake is not investing early in scalability and governance. Enterprises often start with successful pilots, but fail to plan for what happens when that pilot needs to serve hundreds or thousands of users. I’ve worked with companies where initial success was quickly followed by operational bottlenecks because there weren’t proper controls, observability, or cross-cloud policies in place. That’s why, when I led initiatives like the iPaaS platform at NetApp, we designed from day one with API-first and enterprise scalability in mind.

And finally, there’s often a disconnect between procurement and product teams. The people buying the tech aren’t always the ones implementing or using it, which can lead to misaligned expectations. Bridging that gap by including engineering, product, and support in the evaluation process is something I’ve consistently advocated for, especially when working with large clients across industries.

In short, adopting new technology in a large enterprise isn’t just a technical initiative; it’s an organizational shift. Success comes when the right problems are being solved, the right teams are involved, and the path to scale is clear from the start.

With automation increasing, how do you ensure that human creativity and critical thinking remain central to business decision-making?

Automation is incredible at eliminating inefficiencies, but if you’re not careful, it can also lead to decision fatigue or a false sense of confidence in machine-generated outputs. For me, the key is being intentional about where automation adds value and where human judgment is still essential.

At NetApp, when we built AISA, our generative AI-powered support assistant, we designed it to handle routine and repeatable tasks, things like answering frequently asked support questions or retrieving documentation. But we were very clear about drawing the line: when it came to nuanced customer issues, product roadmap prioritization, or interpretation of ambiguous data, we always brought humans back into the loop. We didn’t want the AI to replace thinking; we wanted it to create space for thinking.
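One way to picture where that line gets drawn is a simple routing rule: routine, high-confidence queries go to the assistant, while ambiguous or high-stakes ones go to a human. The categories and thresholds below are illustrative assumptions, not AISA’s actual routing logic.

```python
# Sketch of the "draw the line" routing described above: routine, high-
# confidence queries go to the assistant; ambiguous or sensitive ones go
# to a human. Intents, tiers, and thresholds are illustrative assumptions.
ROUTINE_INTENTS = {"documentation_lookup", "known_error_faq", "license_status"}
CONFIDENCE_THRESHOLD = 0.8

def route(query: dict) -> str:
    """query: {'intent': str, 'confidence': float, 'customer_tier': str}"""
    if query["customer_tier"] == "critical":
        return "human"  # nuanced, high-stakes accounts always get a person
    if query["intent"] in ROUTINE_INTENTS and query["confidence"] >= CONFIDENCE_THRESHOLD:
        return "assistant"
    return "human"  # default to human judgment when unsure

print(route({"intent": "known_error_faq", "confidence": 0.92, "customer_tier": "standard"}))
print(route({"intent": "data_loss_report", "confidence": 0.95, "customer_tier": "standard"}))
```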

One way I ensure creativity stays central is by designing workflows that surface insights, not answers. For example, with our Text-to-SQL tool, the point isn’t to make decisions for analysts; it’s to remove technical barriers so they can ask better questions. By lowering friction in exploration, you enhance creativity and critical thinking because people can iterate faster, see more angles, and dig deeper into “why,” not just “what.”

Another strategy is embedding a critical review into the process. Whether it’s a product spec, a roadmap decision, or a GTM strategy, I build in deliberate moments where we stop and ask: Does this make sense? Are we solving the right problem? What assumptions are we making? Even with AI-powered recommendations, I challenge teams to treat them as starting points, not final calls.

Lastly, culture plays a big role. If your team feels safe to question things, to offer alternative ideas, and to explore without penalty, then creativity thrives, even in a highly automated environment. Some of the best ideas I’ve seen, like a framework tweak to streamline Continuous Delivery or a UX change to improve onboarding, came from engineers, designers, or customer support reps who had the space to step back and think differently.

So, to me, the goal of automation isn’t to replace human insight; it’s to amplify it. It’s about creating the conditions where people can focus on what truly requires judgment, empathy, and imagination. That’s where the real value lies, and that’s what I aim to protect and enable as a product leader.

If you had unlimited resources and no constraints, what groundbreaking product or innovation would you build to redefine the future of cloud computing and enterprise technology?

If I had unlimited resources and no constraints, I would build a fully autonomous enterprise platform, something I’d call a Cognitive CloudOps Fabric. Think of it as a unified, AI-native operating layer that sits above all public clouds and enterprise systems, with the intelligence to observe, predict, optimize, and act without human intervention, but with human oversight.

Right now, cloud computing is still largely reactive and fragmented. Even in advanced organizations, teams are dealing with dozens of dashboards, siloed metrics, disconnected tools, and manual decisions that slow everything down, from performance tuning to cost optimization to compliance. What I envision is a system that changes that entirely: something that’s self-healing, self-scaling, and self-optimizing, all in real time.

This platform would combine three foundational elements:

First, a context-aware telemetry engine that ingests and correlates data across the stack: infra, app, user behavior, costs, and even external factors like weather, market conditions, or geopolitical risks.

Second, a closed-loop AI engine trained not just on logs and metrics, but on business outcomes. It wouldn’t just tell you “CPU is spiking”—it would understand whether that spike is affecting your most profitable customer, and adjust resources or workflows accordingly.

And third, a natural language interface that allows anyone in the organization (developer, analyst, or executive) to query the system or give it high-level directives: “Optimize my pipeline for cost and latency,” or “simulate impact if we switch cloud regions.” Think of it like ChatGPT meets GitOps meets Site Reliability Engineering, at a planetary scale.

It would blur the lines between observability, governance, cost management, and even strategic planning, bringing all those elements under a single, AI-governed control plane.
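Purely as a thought experiment, that directive layer might look something like the sketch below: a natural-language directive is interpreted into a structured plan, checked against human-defined policy, and only then executed. Everything here is hypothetical; no such platform exists today.

```python
# Speculative sketch of the natural-language directive layer: a directive is
# mapped to a plan, the plan is checked against policy, and only then acted on.
def interpret(directive: str) -> dict:
    """Placeholder for an LLM that turns a directive into a structured plan."""
    if "cost" in directive and "latency" in directive:
        return {"action": "retune_pipeline", "objectives": ["cost", "latency"]}
    return {"action": "simulate", "objectives": []}

def allowed_by_policy(plan: dict) -> bool:
    """Human-defined guardrails: destructive or compliance-relevant actions
    always require explicit approval."""
    return plan["action"] not in {"delete_resources", "change_region"}

def execute(directive: str) -> str:
    plan = interpret(directive)
    if not allowed_by_policy(plan):
        return f"Plan '{plan['action']}' requires human approval"
    return f"Executing {plan['action']} optimizing for {plan['objectives']}"

print(execute("Optimize my pipeline for cost and latency"))
```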

Now, of course, building this wouldn’t just be a technical challenge—it would require a rethinking of how enterprises trust, govern, and collaborate with AI systems. But if we could crack that, I believe it would completely redefine how organizations build, operate, and scale in the cloud.

It wouldn’t just save time or money; it would fundamentally shift the role of operations from maintenance to innovation. That’s the kind of future I’m excited about—and it’s the kind of problem I’d love to tackle if given the freedom to build without limits.

“AI Is Not the Future, It’s Already Here”: Driving Innovation at the Crossroads of AI, DevOps, and Cloud at NetApp

As Director of Product at NetApp, I’m focused on bridging AI innovation with real-world operational impact. One of the most transformative projects I’ve led is AISA—our GenAI-powered support assistant. Built using large language models (LLMs), Retrieval-Augmented Generation (RAG), and Azure-hosted vector databases, AISA now delivers faster, more accurate resolutions to customer issues. It’s already cut support response times by 10% and surfaced over $1M in new opportunities by earning deeper trust with our customers.

In parallel, I’ve been exploring how AI can streamline DevOps workflows, from intelligent incident triage to automated root cause analysis, bringing greater efficiency to our internal teams. I’m also incubating a Text-to-SQL concept aimed at empowering non-technical users to query data using plain language, making insights more accessible across the business.

For me, the goal is clear: integrate AI meaningfully into everyday cloud and DevOps practices, not as a buzzword, but as a force multiplier that improves how we deliver, operate, and scale our products.
