NextFin News -- In today’s world, where AI technology is advancing at breakneck speed, a profound industrial transformation is accelerating across the globe. From rapid iterations of foundation models to the emergence of intelligent agents, AI is shifting from a cutting-edge technology into a core force that drives business growth and reshapes entire industries. Yet even as this technological revolution unlocks boundless opportunities, it has also triggered widespread “AI anxiety”—companies worry about missing the window and being overtaken by competitors, while also fearing that massive investment may fail to deliver measurable returns. How to cut through the fog and turn AI from a “sounds great” concept into “actually works” productivity has become a critical challenge facing every business leader.
DeepPractice is a video podcast series by TMTPost that focuses on the evolution and real-world deployment of AI technologies. Setting aside abstruse jargon and flashy buzzwords, it takes a deep dive into implementation paths, decision-making logic, and hard-core operational details. In this episode of DeepPractice, we invited Chen Xudong, Chairman and General Manager of IBM Greater China, and Xiong Yi, Senior Vice President at Schneider Electric and Head of Strategy and Business Development for China, to discuss how enterprises can break through the impasse in digital transformation in the AI era.
The Dilemma Behind Companies’ Dual Anxieties
Amid rapid AI iteration, corporate anxiety is no longer a single, narrow confusion about how to use a technology. Instead, it is the result of two layers of uncertainty—one from the broader macro environment and the other from on-the-ground implementation—stacking on top of each other. This has become a widely felt pain point across industry today. On one hand, AI evolves so quickly that what is cutting-edge today may be outdated tomorrow. On the other hand, once AI is deployed, how can its value be demonstrated so that the business realizes tangible gains? These two questions together form the anxiety weighing on today’s enterprise users.
Chen Xudong attributes companies’ core anxieties to two main dimensions. The first is the systemic uncertainty brought by shifts in the macro environment: sharp swings in input prices such as oil and precious metals, frequent changes in geopolitics and regulatory rules, and the ongoing pressure to raise productivity—all of which make it difficult for CEOs to make steady strategic judgments. The second is anxiety about getting AI applications into real-world deployment. Although global spending on AI this year is expected to reach US$2.5 trillion and AI’s commercial value is widely viewed positively, most companies have yet to see clear results from their AI initiatives. The “afraid of falling behind, yet afraid the investment will be wasted” mindset has left many firms hesitant in their AI planning.
Schneider Electric’s observations are closer to the day-to-day pain points of real-economy sectors such as energy, industry, data centers, and infrastructure. Xiong Yi breaks corporate anxiety down into a dual shock from technology and macro-level costs. From a technology deployment perspective, the fixed processes that used to underpin digital transformation have become hard to match with today’s pace of iteration. An enterprise-grade AI project typically takes 1.5 to 2 years from planning to rollout, and the rapid evolution of AI has companies worried that what they invest in may become outdated before it delivers benefits—making ROI assessment a major challenge.
From a macro-environment perspective, even as companies improve efficiency at the micro level through lean production and AI adoption, market shifts and supply-chain uncertainty can overturn those efforts outright. Building organizational resilience to withstand sudden macro shocks has become a central need for real-economy businesses. At its core, this anxiety stems from a mismatch between the speed of technological progress and the pace at which enterprises can implement it, as well as the tension between micro-level efficiency gains and macro-level volatility—leaving companies trapped in the dilemma of “not using AI isn’t an option, but using it feels risky.”
AI Deployment: Platform-based Investment is Key
In the face of multiple anxieties, the way forward is not to chase the AI trend blindly, but to anchor on strategic discipline—shifting from isolated pilot attempts to sustained, platform-based investment, so that AI applications are truly tied to business value.
From the practical standpoint of real-economy enterprises, Xiong Yi put forward three core principles for putting AI into practice—also the key yardsticks for judging whether an enterprise has truly implemented AI.
First, build a platform mindset. Move away from scattered, single-point applications and establish an enterprise-level AI capability system so that data, know-how, and technical capabilities can be accumulated and reused. Schneider Electric has embedded AI throughout the three-layer EcoStruxure™ architecture—access & adaptation, operations & control, and management & optimization—and has built a unified data platform to enable intelligent energy and industrial operations. This is platform thinking put into action.
Second, stay scenario-driven and insist on a quantifiable return on investment. AI applications should focus on scenarios that can create value quickly, shortening the ROI payback period as much as possible. Set clear, measurable targets for projects—such as hours saved, staffing optimization, and improvements in production efficiency—rather than vague “efficiency improvements.” Schneider Electric identifies internal use cases by running its AI “DaShi Cup” competition, and when selecting projects it prioritizes hard indicators such as whether they can raise overall labor productivity—achieving “big results with small budgets.”
Finally, encourage spontaneous, bottom-up adoption. When AI tools truly solve employees’ pain points—for example, cutting a two-hour production-planning task down to five minutes—employees will use them proactively. This organic adoption is far more effective than top-down mandates, and it represents the best state of AI implementation.
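The quantifiable-ROI principle above reduces to a simple payback calculation. The helper and all figures below are hypothetical illustrations, not a Schneider Electric tool or its actual numbers:

```python
def payback_months(upfront_cost: float, monthly_benefit: float,
                   monthly_run_cost: float = 0.0) -> float:
    """Months until cumulative net benefit covers the upfront investment."""
    net = monthly_benefit - monthly_run_cost
    if net <= 0:
        return float("inf")  # the project never pays back
    return upfront_cost / net

# Hypothetical AI pilot: 1,000,000 upfront, saving 400 hours/month valued
# at 250 per hour, minus 20,000/month in inference and maintenance costs.
months = payback_months(1_000_000, 400 * 250, 20_000)
print(f"Payback period: {months:.1f} months")
```

Framing a candidate project this way forces the “clear, measurable targets” the text calls for: if the payback period cannot be stated, the scenario is probably not yet well enough defined.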
Chen Xudong, meanwhile, suggested that enterprises should focus on action and accumulation.
First, accelerate the pace of digital transformation. Supply-chain resilience can be improved partly through digital means and partly through optimization in areas such as organizational management;
Second, respond to AI-related anxiety with experimentation: in the face of an unstoppable AI wave, companies should still experiment to a certain degree. He cited IBM’s practice of using itself as “client zero”: when leveraging AI to improve office efficiency in HR, finance, and other functions, the company’s resolve was very firm, and after rapid trial-and-error and iteration it has already generated considerable ROI;
Third, mobilize employees internally to look for steps that can be optimized. Help everyone understand what AI can do, then encourage them to identify where AI can make a difference, and only then move on to larger-scale investment.
When AI applications move from pilot programs to scaled deployment, enterprises need to choose a unified platform to avoid repeated investment in isolated, one-off projects. IBM’s watsonx platform and the watsonx Orchestrate system are designed to address the management and coordination of enterprise AI applications, enabling AI solutions across different departments to call on one another and form an integrated system.
An earlier report jointly released by Schneider Electric’s Business Value Research Institute and IBM, AI for GREEN: Scenario-Driven AI Applications to Achieve a Breakthrough in Enterprise Value, also made it clear that enterprises’ expectations for AI value are shifting from a single dimension to a more multi-layered one, with three notable trends: expanding from a focus solely on business returns to also emphasizing social and environmental value; moving from value orientation at the macro decision-making level down to micro-level individual experiences; and shifting from pursuing short-term growth to prioritizing long-term value breakthroughs. Based on this, the report put forward the “AI for GREEN” value proposition, arguing that AI can help enterprises deliver five major types of value: Growth, Reliability and Resilience, Efficiency and Satisfaction, Environment (sustainability), and New Horizon (entirely new business models).
Notably, the core logic of AI deployment has already shifted—from being driven by IT departments in the past to being driven by business units today. Chen Xudong noted that past digitalization efforts mostly addressed general needs such as finance and supply chains, whereas AI can optimize processes around a company’s specific business pain points. This requires business units to articulate real needs, with technical teams providing support, forming a “business + technology” co-creation model. This model also makes enterprise AI applications better aligned with real operations, avoiding a disconnect between technology and the business.
Overall, in the process of putting AI applications into practice, enterprises need a scenario-centered implementation path.
Step 1: Align on a shared understanding and plan the full map. Enterprises should align on AI strategy through cross-team communication, and use a value framework to comprehensively sort through business processes, producing a clear panoramic map of AI scenarios and defining where enablement should occur—thereby reducing the cost of trial and error.
Step 2: Focus on scenarios and iterate in small, fast steps. Based on the panoramic map, assess technical feasibility, resource requirements, and risks to identify the specific scenarios that should be prioritized for implementation. Rather than chasing an “all-purpose star” project, companies should form cross-functional teams and start with “specialist” projects that address discrete problems within a process, iterating agilely to validate value quickly.
Step 3: Data accumulation to build differentiation. High-quality data is AI’s “fuel,” while a company’s unique industry know-how and experience are what will form its core competitiveness in the future. Companies need to establish efficient data-processing pipelines and governance mechanisms, and deliberately uncover and organize the tacit knowledge hidden in documents, processes, and experts’ minds—turning it into a recordable, reusable corporate knowledge base.
Step 4: Democratized enablement, innovation for all. Successful AI applications should not be confined to technical teams; they should enable the organization at scale. This requires closing the cognitive gap between business and technology within the organization, so that non-technical employees can participate more in technological innovation.
AI Integration Brings Growth Opportunities
“At the end of AI is compute, and at the end of compute is energy.” This industry consensus has made deep integration between AI and energy a central direction for industrial development, creating brand-new opportunities for both technology companies and energy-tech players. Meanwhile, China’s advantages in technology, application scenarios, and cost have given this integration a stronger foundation for real-world deployment.
For Schneider Electric, the energy challenges brought by AI’s rapid rise are essentially a dual issue of power supply and power management. On the one hand, building compute centers faces power bottlenecks; Schneider Electric is addressing the question of whether power is available by advancing initiatives such as direct connections to renewable electricity and new power architectures. On the other hand, peak power fluctuations from AI workloads are faster and less predictable, and traditional power-supply solutions can no longer keep up. Schneider Electric is exploring new technologies such as electrochemical energy storage and flywheel storage to respond to power peaks in seconds, while also refining power management from the cabinet and server levels down to the chip level—addressing whether power is being used well. This process has also driven changes in Schneider Electric’s business model and technology system: shifting from straightforward product sales to a co-creation model of joint R&D with customers, and from supplying power-side peripheral equipment to deep integration with the core of compute.
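The second-level peak response described above can be illustrated with a toy dispatch loop. This is a minimal sketch under invented numbers (the load trace, grid limit, and storage sizing are all hypothetical), not Schneider Electric's actual control logic:

```python
def shave_peaks(load_kw, grid_limit_kw, battery_kwh, max_discharge_kw,
                dt_h=1 / 3600):
    """Greedy dispatch: discharge fast storage whenever load exceeds the
    grid limit. Returns per-step grid draw and remaining stored energy."""
    grid = []
    energy = battery_kwh
    for load in load_kw:
        excess = max(0.0, load - grid_limit_kw)
        # discharge is limited by power rating and by remaining energy
        discharge = min(excess, max_discharge_kw, energy / dt_h)
        energy -= discharge * dt_h
        grid.append(load - discharge)
    return grid, energy

# Hypothetical 1-second trace: an AI workload spike to 800 kW on top of a
# 300 kW base load, with a 400 kW grid limit and a small fast-storage unit.
trace = [300.0] * 3 + [800.0] * 3 + [300.0] * 3
grid, left = shave_peaks(trace, 400.0, battery_kwh=1.0, max_discharge_kw=500.0)
print(max(grid))  # peak power actually drawn from the grid
```

The point of the sketch is the timescale: because AI load spikes arrive in seconds, only storage that can be dispatched step by step at that resolution (electrochemical or flywheel, as the text notes) can clip them before the grid sees the peak.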
IBM has provided technical support at the AI-technology layer for AI implementation in the energy and industrial sectors. For visual inspection scenarios with extremely high yield rates, IBM introduced a reverse learning approach that builds a “perfect product” model to detect anomalies, addressing the AI training challenge many physical-industry companies face when defect samples are scarce. At the same time, IBM has continued to deepen its efforts in hybrid cloud and AI, integrating traditional AI with generative AI on the watsonx platform to offer enterprises an end-to-end solution—from data management and model training to AI Agent orchestration—making it a core choice for companies building their AI platforms. The previously co-released “AI for GREEN” report and the GROWTH model also laid a theoretical foundation for integrating AI with energy.
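The article does not disclose the internals of IBM's reverse-learning approach. The general idea it describes (model only good samples, then flag anything that deviates from the "perfect product") can be sketched as a simple one-class check; the features, data, and threshold below are all hypothetical:

```python
import statistics

def fit_normal_model(good_samples):
    """Model 'perfect' products by per-feature mean and standard deviation,
    learned from good samples only (no defect examples required)."""
    features = list(zip(*good_samples))
    return [(statistics.mean(f), statistics.stdev(f)) for f in features]

def anomaly_score(model, item):
    """Largest per-feature z-score: distance from the 'perfect product'."""
    return max(abs(x - mu) / sigma for x, (mu, sigma) in zip(item, model))

# Hypothetical inspection features (e.g. width, weight) from good parts only.
good = [(10.0, 5.0), (10.1, 5.1), (9.9, 4.9), (10.05, 5.02), (9.95, 4.98)]
model = fit_normal_model(good)

print(anomaly_score(model, (10.0, 5.0)) < 3.0)   # a typical part scores low
print(anomaly_score(model, (12.5, 5.0)) > 3.0)   # a deviant part scores high
```

The design choice matters for the scarce-defect problem the text raises: because the model is fit entirely on good parts, a factory with excellent yield never needs to collect defect samples before deploying inspection.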
From anxiety to breakthrough, from isolated pilots to industry-wide collaboration, enterprises’ AI transformation is entering deeper waters. This transformation is not a one-off technology upgrade; it is a comprehensive reshaping of corporate strategy, organizational structure, and business models.
Below is the transcript of the conversation:
Liu Xiangming: Hello everyone, and welcome to TMTPost’s video podcast Deep Practice. Today, we’ll be focusing on two keywords that are drawing a lot of attention right now: anxiety and opportunity. First, please allow me to introduce today’s two distinguished guests: Mr. Xiong Yi, Senior Vice President of Schneider Electric and Head of Strategy and Business Development for China. Mr. Xiong, welcome. Our other guest is an old friend—Mr. Chen Xudong, Chairman and General Manager of IBM Greater China. Mr. Chen, welcome.
AI Agents Are Here—What Are Companies Anxious About?
Liu Xiangming: We’ve just wrapped up the Spring Festival holiday, but for many people in the industry, it probably didn’t feel all that festive. A steady stream of news has been breaking lately—around large models, embodied intelligence, and more—and after the holiday the industry started buzzing again about “Xiaolongxia.” So when people have been meeting up recently, one question keeps coming up: “What have you been anxious about lately?” Today, let’s start with personal anxiety—could Mr. Xiong and Xudong share what you’ve each been feeling anxious about recently?
Xiong Yi: Hello, everyone. Mr. Liu’s question just now was really interesting—what are we anxious about? Personally, I’m actually a fairly optimistic person, and I rarely feel anxious. What I think about more is how we respond to the various technology-application practices we’re facing right now, and how we can truly put into practice the concepts we talk about every day—like the “Xiaolongxia” agent you just mentioned.
So I think it’s very important to actively try new technologies. In today’s era, we especially need to ask ourselves: if so-called “zero-employee companies” emerge in the future, will we ultimately be the “zero,” or the “one”? That’s what I’m paying relatively close attention to right now.
Chen Xudong: We actually have something very specific in mind. During the Spring Festival period, there was a major stock-market swing—especially among some global software companies—and it was closely tied to developments in AI. In particular, AI’s revolutionary impact on programming and code writing: before, people were mostly just playing around with it, but once it touches enterprise-level applications, some have suggested using AI to rewrite mainframe code. But recently the (share price) has been slowly recovering, because people realized it’s not that simple. Just translating the code once is still a long way from modernizing an entire IT system. So I wouldn’t call it real anxiety, but it did trigger enormous market volatility.
Liu Xiangming: Let’s make it more concrete: from the perspective of your current roles, what is the core of the market’s anxiety? And at the industry and enterprise levels, how should we understand these anxieties—and how should we respond?
Chen Xudong: From what we’ve observed, customers are currently anxious on two main levels.
First, it’s the uncertainty brought about by shifts in the macro environment. The broader environment is changing far too quickly, and many things that used to be hard to foresee are now happening with increasing frequency. For example, key input prices such as oil have been swinging sharply in the near term. Geopolitical tensions, changes in laws and regulations, and the mounting pressure to raise productivity together form the core set of issues that leave many CEOs deeply uneasy.
Second, it’s anxiety over the adoption of artificial intelligence (AI). Although AI has entered the mainstream public conversation, real-world deployment inside many companies has yet to deliver clearly visible results. This has triggered a widespread worry: if competitors manage to apply these technologies successfully, will they pull far ahead? As a result, how to use AI effectively to accelerate gains in competitiveness has become a major source of anxiety. Some experts predict that global spending on AI will reach roughly US$2.5 trillion this year, further intensifying the sense of urgency around the issue.
Xiong Yi: We’re seeing many of the same things. Based on what we observe across a wide range of industry clients we work with—and in our own business as well (which spans industrial, infrastructure, data centers, buildings, and other areas)—people’s concerns mainly concentrate on two fronts.
First, the technology itself. The most fundamental anxiety is the pace of iteration—it’s simply too fast. As I mentioned in a discussion with colleagues this morning, in the past when companies pursued digital transformation or adopted AI, they typically followed a fairly fixed playbook: align leadership thinking at the top, run training, then find a technical team or vendor to implement the project, and finally roll out applications that improve efficiency in specific scenarios—such as quality inspection, computer vision, or deploying robots in factories.
But now we’re finding that it has become extremely difficult to evaluate ROI on projects like these, because the technology is evolving so quickly. An enterprise project—from planning to launch to generating real impact—usually takes 18 months to two years, which isn’t considered slow for large companies. Yet the development speed of AI and related technologies makes it almost impossible to imagine what they will look like two years from now. That is the biggest source of anxiety: do we use it now or not? If we don’t, it feels like everyone else is doing it; if we do, we worry that costly investments will become obsolete quickly—and many systems can’t be iterated and upgraded at any time. This is a very critical issue we’ve seen on the technology side, both in serving clients and in our own practice.
Second, anxiety driven by uncertainty in the macro environment. At times, shocks and impacts from the macro environment are fundamental. So how to build organizational resilience to cope with sudden, macro-level disruptions is another widely shared concern I’ve observed.
Facing Anxiety Head-On: What Should Companies Do?
Liu Xiangming: When it comes to companies’ core anxieties, what advice would you give?
Xiong Yi: I think we can start with recommendations at the macro level, and then gradually drill down into practical execution. For a company, regardless of its size, it is first and foremost a large organization. So comprehensive consideration, integrated planning—or, put another way, a big-picture mindset—remains crucial. We have been dealing with the various challenges mentioned earlier (technology iteration, macro-level uncertainty) for many years, and we’ve also invested a lot in technology and done a great deal of detailed work, yet it still feels like we’ve been missing a platform-level way of thinking. In more corporate terms, that means overall top-level design at the level of strategic planning. I believe this is something that needs to be thought through in advance—or rather, something that should be started now.
Second is how to find scenarios where technology applications can deliver results quickly. In the past, we tended to look for scenarios that could be implemented rapidly; now, under the overarching planning framework mentioned earlier, we need to look for projects with shorter payback periods. Previously, a project’s payback period might have been a year and a half—can we shorten it to one year, or even six months? We need to be able to explain clearly whether, after a use case goes live—for example, with an investment of RMB 1 million—it can start generating benefits after six months, or whether that RMB 1 million can be recouped within two or three years. In short, there needs to be a relatively clear ROI expectation.
Third, in my view, whether it’s AI or any other technology, long-term accumulation is essential—including the accumulation of data, internal capabilities, and talent. A possible solution—or what we are trying right now—is that we can’t rely only on one-off, ad-hoc task forces assembled for specific scenarios to fight isolated battles. Instead, we need to let these technology capabilities, data assets, and people’s experience gradually settle and compound, forming a reusable and continuously iterative foundation. At the same time, throughout this process, people’s capabilities can also be trained and improved in a systematic way. Even though the external environment is full of uncertainty, these may be the areas where companies have clear internal certainty about what they need to advance and build.
Chen Xudong: On the two anxieties I raised: the first is that circumstances change fast—how should we respond? The price fluctuations and supply-chain disruptions that Mr. Xiong mentioned actually surfaced a few years ago. Our recommendation is: first, accelerate the pace of digital transformation, integrate information for analysis and research, and enhance supply-chain resilience. This requires digital tools, and it may also require organizational management—possibly even bringing in a consulting firm as an advisor.
The second point is also AI-related: people are worried about falling behind, but they’re also afraid that technology investments will become obsolete very quickly. Another major issue is that, in many cases, the return on investment (ROI) for AI is still hard to calculate clearly right now—this is a real challenge. But looking back at past informatization (IT) spending, it wasn’t necessarily that easy to quantify either. What’s interesting is that in this round of AI, everyone is especially eager to pin it down—probably because it’s still not entirely clear what AI can actually do, and they’re afraid of wasting too much money. But in the face of an unstoppable AI wave, my advice is: you must try.
How big you start depends on the enterprise. A company like IBM, for example, implements AI across the board without hesitation, applying the technology within our own business. Internally, we call ourselves the “client zero.” For instance, when using AI to improve the efficiency of HR and finance office work, we’ve been relatively “aggressive.” That comes down to each company’s level of resolve.
At the beginning, you’ll definitely run into situations where AI doesn’t work well. But once you know where it falls short, you know how to optimize it—and that becomes accumulated learning for the company. So you have to start, and use that process to build organization-wide capabilities. Mobilize everyone to find places that can be improved; help everyone understand what AI can do; and then make bigger investments. That’s roughly the path. But you must begin—this isn’t something you can achieve overnight. Especially in an enterprise setting, you can’t just drop in a system and expect it to work.
Liu Xiangming: Both of you explained that extremely well. Let me briefly summarize: First, companies still need to maintain strategic composure—after all, this (responding to change) is a major undertaking, and you can’t panic blindly. Second, you have to take action. Just as Mr. Xiong said a moment ago, you can start with concrete practices—like trying “Xiaolongxia”—rather than staying at the level of discussion. When facing something entirely new, you need first-hand experience and direct understanding. On that note, let me take this opportunity to ask: how is your “Xiaolongxia” training going now?
Xiong Yi: We’ve only just started trying it. I believe that an intelligent agent that can work 24/7 nonstop, keep learning continuously, has no emotions, and doesn’t get tired—so long as data security allows and it’s given enough information—will definitely do better than we do. That’s one takeaway I’ve had.
As AI Products Surge, How Should Companies Adapt?
Liu Xiangming: Yes—and that actually sets up my next question. Thinking back to a year ago, when we were discussing AI, technological progress was already moving fast, and DeepSeek’s breakout was really just around last year’s Spring Festival. Looking back now, it feels like it was five or six years ago—time has gone by incredibly quickly.
Then the emergence of agents like “Xiaolongxia” makes it feel as though AI used to be something that merely helped you answer questions or gather information, but now it has seemingly “grown hands and feet.” Those “hands and feet” may not be fully developed yet, but they can already help you carry out certain tasks and connect different steps together.
That, in turn, leads to the question I’d like to ask both of you: AI used to be more of an assistive tool, but now it may truly be becoming an automatic, self-starting node in the workflow. This means companies need to continuously redefine and reassess what AI is. How do you view how rapidly AI has changed over the past two years? And how should companies adjust themselves to adapt?
Chen Xudong: On AI, our company put forward an AI strategy seven or eight years ago. After generative AI (GenAI) emerged, I feel many people still haven’t fully distinguished what’s different about it at the level of mindset and understanding. I’d like to take this opportunity to share my view: with generative AI—especially across two “worlds”—its emergence and the changes it brings are not the same.
The first world I call the “representational world,” meaning areas that don’t need to be directly tied to hardware at all. Whether it’s language, images, video, writing code, or drafting emails—work done in front of a computer all belongs to this representational world. This world has already been rapidly disrupted. Generative AI is like a powerful assistant that can help you complete a lot of work. If there were no security concerns, it would be an exceptionally good tool. But the news this morning also specifically mentioned that when enterprises use such tools, it’s best not to connect them to the public internet; otherwise there may be risks—for example, an AI agent might steal corporate information during interactions. So for now, it’s still hard to draw firm conclusions about how this area will ultimately play out.
The other world I call the “physical world,” and the changes there are also enormous. For instance, robotics started “dancing” last year; this year it has become especially impressive—robots can even do flips. Many advances in generative AI are being applied at scale in the physical world, and the most widely deployed application at the moment is actually autonomous driving. In the future, the job of driver is very likely to be replaced; from a technical logic standpoint, that’s no longer an issue. Beyond that, robots can help with household chores or work on production lines. Right now, many assembly lines still have to rely on human labor, because robots still can’t fully replicate the tactile sense and judgment humans have in fine operations (such as pressing, or perceiving what’s happening during assembly). But in other areas—such as surgery—robots have already performed extremely well. The pace of technological evolution varies across different domains.
Therefore, for businesses, the first step is to understand what changes AI has brought to the representational world and to the physical world, and then think about how those changes will affect their own companies. Take the visual inspection technology mentioned earlier: I used to think it had already been applied very widely, but after visiting companies, I found that adoption is still far from where it needs to be. In many scenarios, people are still relying on the naked eye to do time-consuming, labor-intensive inspections, and the requirements in many specialized scenarios are not something off-the-shelf algorithms can handle.
So everyone needs to get clear on what AI can do first, and then go find the right scenarios inside the business. Don’t be fooled by surface appearances—figure out exactly what it can do and what it can’t. Every department head needs a very clear understanding of what AI is capable of, and then use it as a tool to improve efficiency. There are still many optimization opportunities on production lines that we haven’t captured, and many companies are still a long way from doing well in this area. This has nothing to do with generative AI; it’s traditional AI. It shows that people still haven’t fully used the “old” AI, and the “new” AI has already arrived.
Xiong Yi: At this stage, AI applications are indeed going deeper—moving from supporting processes into core business processes, and even extending into control of the physical world. Through data collection and model-based analysis and prediction, we ultimately achieve control. We’ve also done some projects where, at certain specific nodes, we’ve begun to implement this kind of process control or node-level control. The logic is roughly this: convert expert experience, or information gathered by smaller models, into knowledge for a large model; then have the large model issue instructions that drive the smaller models to execute. That’s the general idea.
For example, with the visual inspection we just talked about, once the product comes off the line it needs to be visually inspected. We once had a factory deploy this system. Previously it required three people rotating through three shifts, each watching for about eight hours (in reality they couldn’t watch continuously and needed breaks), and all three were needed to keep inspections running nonstop.
Later we adopted visual inspection. But at the beginning, because our product quality was very good and defects were rare, the system lacked error samples to learn from and couldn’t tell what counted as a defective item. Without samples, no matter how “smart” AI is, it can’t be put into practice.
The second reason is cost. Many of our customers—including our own factories—face cost issues when using this type of technology. For example, in highly customized assembly steps, or even an operation as simple as tightening a small screw, if you use robots or robotic arms, costs can be very high when customization requirements change frequently, because you have to keep reprogramming or recalibrating. By comparison, manual work can actually be cheaper. So you have to find the most economical approach, and a robot/AI/robotic-arm collaborative solution isn’t necessarily the most cost-effective one.
The automotive industry may be somewhat unique because it is highly standardized. But we have a large number of discrete manufacturing scenarios, and in the end it comes down to economic returns. So why is it that when AI technologies enter core business processes—actually replacing manual work through an integrated software-and-hardware approach—the efficiency or cost-effectiveness is not necessarily the highest? This is a problem many enterprises run into. As long as customers have personalized requirements, it’s a challenge you simply can’t avoid.
Chen Xudong: Let me add one point on the customization issue just mentioned. Customization really is a cost “killer”—it drives costs up significantly. But this is a good opportunity to introduce IBM’s solution. For example, in visual inspection, IBM provides a platform. What’s distinctive about it is that it doesn’t require you to customize for a specific scenario; instead, it can automatically train models for different scenarios. That way, the deployment cost for each new scenario is relatively low, and you don’t need to assign people to develop a bespoke solution for every scenario. So companies like IBM build platforms like this so that after an enterprise succeeds at one internal use case, it can roll it out to other areas on its own.
In addition, on hardware requirements: in the past, visual inspection placed extremely high demands on cameras—one camera could easily cost well over a hundred thousand yuan. Now, we can even achieve inspection with a quick photo taken on a smartphone, which greatly lowers the hardware threshold. Therefore, from a technical standpoint, these kinds of solutions can be adopted and scaled to a certain extent.
Response of Schneider Electric and IBM to Challenges
Liu Xiangming: Let’s get a bit more focused. Mr. Xiong, what I’m particularly concerned about is that Schneider Electric is facing digital and intelligent transformation, along with various international dynamics and broader macro challenges. In your view, what is the biggest challenge right now? And how are you responding? This is something everyone cares about.
Xiong Yi: There are indeed many challenges. From a strategic perspective—whether globally or in China—we should stay focused on the things we have been doing all along, and, amid the current technological revolution and the changes brought by AI, keep doing them—indeed, do them even better.
The core is this: against the backdrop of the energy transition, we must maintain “strategic resolve” in energy technology. Rather than blindly chasing hotspots, we continue to deepen our expertise in the energy domain and align our products, solutions, services, and even our overall system in that direction. I think that is what we need to stay highly focused on right now.
To put it a bit more expansively: whether people are talking about how “the ultimate constraint on computing power is electricity,” or about the broader macro environment and the global landscape, energy has unavoidably become a truly central theme. Energy competition, energy governance and control, and the energy security that countries keep emphasizing—all revolve around this. Particularly right now, traditional energy is under enormous pressure to become greener and more sustainable, while new forms of energy (or, more precisely, a new energy landscape centered on electricity) are rising. What should we do? We should align the series of transformations we just discussed in this direction. This is both our challenge and our biggest opportunity.
Why is it a challenge? Because the traditional power landscape is undergoing a fundamental shift. I believe that even in China, our past energy mix dominated by thermal power was not the endgame—it is changing dramatically. From a corporate perspective today, even the smallest companies are adopting some renewables, such as deploying battery storage or installing rooftop solar panels. The disruption brought by these new energy technologies is reshaping the entire power-system structure—from the more centralized, one-way “grid-to-user” model to a more distributed, multidimensional energy-use landscape where “microgrids + the main grid” work in coordination.
As this landscape evolves, what we’ve accumulated over the years may be an opportunity or an advantage—but it’s also a challenge. Under the overarching trend of the energy transition, we want to rely on our new products and technologies to keep moving forward—staying in step with, and even leading, energy-tech development in the parts of the market changing the fastest. I think that is the biggest challenge we face right now.
Liu Xiangming: And what about IBM’s response?
Chen Xudong: From a global perspective, IBM has been steadily reshaping itself into a company centered on software and consulting. Over the past 10 to 20 years, this shift has been profound. At present, our software business accounts for 45%, while hardware has fallen to below 25%.
As we enter a new era, we call it the era of hybrid cloud and AI. The biggest challenge is how to continue leading enterprises’ IT modernization or digital transformation in the AI era. IBM itself is a software company, and many of our employees work at a computer. As we just discussed, this kind of work can be optimized and efficiency can be improved. So IBM, as “Customer Zero,” has also undergone many changes. One major challenge we face is: can we stay ahead of our clients? That’s why we call ourselves “Customer Zero”—any technology and solution we develop is first piloted internally, and only after it has matured do we share it with clients as a proven case. This is both a challenge and a tremendous source of motivation. Over the past few years, we’ve also gained many clients in the AI space.
Moreover, IBM’s exploration in AI didn’t start with generative AI. We launched our Watson platform more than a decade ago—back then, we called it traditional AI, including applications such as visual inspection. After generative AI emerged, we upgraded it to the Watsonx platform, making it compatible with both generative AI and traditional AI, so that the platform can help clients solve a broader range of problems. As a result, IBM’s role is more about helping clients identify issues they may have—and then solving them.
But in the process, many of these issues also exist within IBM itself. For example, with an organization as large as ours, providing employees with support across finance, HR, and other functions used to require massive investment. Now, more than 50% of that work has been replaced by AI. Internally, the journey also came with its bumps and bruises—at the beginning it didn’t work well and there were lots of complaints, but once we got through the run-in period, it quickly delivered greater value. So you could say AI’s development is both our “bread and butter,” and, in a way, a source of anxiety—how we can come out on top in this race.
How Far Are We from AI at Scale?
Liu Xiangming: At this moment, has AI truly entered the stage of practical deployment and deeper value creation? What is the most essential sign that the industry is genuinely putting it to use?
Chen Xudong: In my view, one key sign is: whether an enterprise already has many AI initiatives it wants to implement. More specifically, it’s whether the company has already identified many concrete links in its own operations where AI can improve efficiency. That implies you understand what AI can do, and you’re able to spot these opportunities—and typically, that ability comes from having had successful hands-on experience before; otherwise it’s hard to uncover those opportunities.
So, in my view, to judge whether AI has truly been rolled out within a company, you look at whether it has already succeeded in one place—and then, on that basis, whether the company itself has proactively identified more opportunities to apply it. It’s a bit like what’s happening at our company: AI applications have now formed a self-reinforcing virtuous cycle. It’s no longer about pushing a specific use case top-down and forcing people to use it; instead, every department is voluntarily driving AI adoption. In my opinion, once you reach that stage, it can be considered a fairly successful start.
Liu Xiangming: To be more specific, you just mentioned that IBM’s HR and finance departments are both using AI. Do you think AI has truly been implemented in these two departments and systems?
Chen Xudong: I think AI has truly been implemented in HR and finance. Because we’re already seeing real results: it has indeed optimized a lot of roles. In the past, many things required you to find someone to ask or to handle; now, in many cases, you basically don’t even know who (or which system) got it done for you. But in the end, it still gets done.
That said, the prerequisite is that the company must have the corresponding systems in place internally. Without those systems, AI alone can’t get these things done. Someone once asked me, “Do you have an AI system for HR?” I said no.
For large companies, HR processes run either on SAP or on some other system. If you don’t have that foundational system, then who records these matters? Someone (or some system) has to log them. There used to be a process; after AI came in, it might be able to bypass some manual steps within that process, but those records still have to be stored somewhere in the end. That’s why I said I genuinely don’t have a standalone “HR AI system”—AI applications still rest on the foundation of the original digitalization.
Liu Xiangming: Mr. Xiong, what’s your take?
Xiong Yi: That’s right. When it comes to how to measure whether AI has truly been implemented in an enterprise, I actually mentioned a few points earlier—we can bring them back and summarize them. I think there are several aspects:
First, we need to shift from point solutions—single use cases, single scenarios, and single departments—to an enterprise-level, platform-based mindset. We’ve seen cases like this. Supply chain, R&D, and customer service may each be using different AI tools or copilots. That creates a problem: these applications are scattered. Within an enterprise, the first thing you need is platform thinking. If a company uses so many different tools, then in the end, experience and data may be fragmented across various places. In my view, that still can’t be called truly mature, nor does it amount to an enterprise-grade, end-to-end AI capability system. That’s the first point: moving from isolated points to platformization.
Second, there are still questions about return on investment. But personally, I’m quite firm on this. As President Chen just mentioned, looking back at the first wave of informatization 10 or 15 years ago, ROI was also hard to articulate, because it was a process of “taking stock” and figuring out what you actually had. In the early stage of informatization, it was indeed difficult to view it through an ROI lens—whether “it directly helped you make money” or “how much cost it optimized.” But now it’s different, and on top of that, the technology is iterating extremely fast. So I want to strongly emphasize this: if AI has already penetrated deeply into the enterprise, can you clearly explain how long it will take for this work to deliver tangible value?
As President Chen mentioned with many examples just now—headcount in HR and finance has gone down, customer satisfaction or on-time delivery rates have improved, production efficiency has increased by a certain percentage—these are all measurable. Otherwise, the projects people are doing lack a basis. Therefore, the second clear marker is whether there is a relatively clear ROI measurement system.
Third, look at whether adoption is more bottom-up, or more driven by leadership mandates. If leadership is requiring people to use AI, many may say it’s hard to use and doesn’t match their existing habits, and the result is money spent with no impact. This used to be the biggest obstacle to informatization. Anything that requires people to write reports every day to justify its value is often something whose value isn’t obvious to begin with—so you have to go “hunt” for value. It’s like our IT department: if it’s writing reports every day about how valuable a system is, that actually suggests it may not be valuable, or that converting it into value is very difficult.
Conversely, if it’s something employees initiate spontaneously across the board, the situation is different. For example, in our Wuhan plant, which was recognized as a “Lighthouse Factory” last year, there’s an employee responsible for production planning. He used to create the daily plan by breaking down the weekly plan and consolidating a large amount of same-day data—such as machine utilization and attendance rates. Every day, he had to come in two hours early just to build spreadsheets and lay out the day’s schedule, and he also had to handle all sorts of unexpected situations—for instance, if leaders decided to drop by for a visit, he’d have to adjust the plan again. Now, with an AI assistant, he can get what used to take two hours done in five minutes. He’s genuinely eager to use it—no one has to tell him; he’ll use it on his own. So, I believe whether a bottom-up, self-initiated adoption can take shape is also a sign of maturity. In my view, these three aspects are the key metrics.
AI’s Endgame Is Energy
Liu Xiangming: AI’s endgame is compute power, and the endgame of compute power is energy. As a leader in energy technology, how does Schneider Electric view the energy challenges brought by AI’s rapid growth? What measures have you taken?
Xiong Yi: First, the question is whether there is enough power at all. Electricity is the foundation of compute power, and many places are still facing power bottlenecks. Whether it’s the “Eastern Data, Western Computing” initiative or data centers being built in places like Ulanqab and Guizhou, the fundamental constraint is whether the power supply is sufficient. In my discussions with data center customers, I’ve learned that the biggest bottleneck isn’t power dispatching or compute power itself, but the massive networking and communications costs that come with data migration. But the prerequisite is: you have to have electricity. We’re driving initiatives such as direct connections to green power and new power-system architectures to address power capacity expansion.
Second is whether power is being used well. The power-demand characteristics of AI workloads differ from those of traditional IT workloads. Through our research, we found that AI workloads ramp to peak power faster, fluctuate more sharply over short periods, and those fluctuations occur on the scale of seconds—or even milliseconds—making them hard to predict. Traditional IT workloads tend to follow recognizable patterns (for example, e-commerce peaks in the evening), but AI compute demand is sudden and difficult to control. This means conventional UPS (uninterruptible power supply) solutions are no longer sufficient; new technologies such as electrochemical energy storage and flywheel energy storage need to be introduced to deliver second-level fast response and smooth out these instantaneous power spikes.
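The storage-based smoothing idea can be sketched in a few lines: a battery covers the gap between instantaneous demand and a moving-average target, so the grid sees a flatter profile. The numbers, the window size, and the moving-average policy are illustrative assumptions, not Schneider Electric’s actual control scheme:

```python
def smooth_load(demand_kw, window=5, battery_kw=400):
    """Return (grid_kw, battery_kw_used) per time step.

    The battery covers spikes (or absorbs dips) relative to a trailing
    moving average, capped at its power rating. Toy numbers only.
    """
    grid, battery, history = [], [], []
    for d in demand_kw:
        history.append(d)
        target = sum(history[-window:]) / len(history[-window:])
        # Clamp the battery's contribution to its rated power.
        delta = max(-battery_kw, min(battery_kw, d - target))
        grid.append(d - delta)      # what the grid actually supplies
        battery.append(delta)       # positive = discharging, negative = charging
    return grid, battery

# A sudden training-job spike from 200 kW to 900 kW and back.
demand = [200, 200, 900, 900, 250, 200]
grid, battery = smooth_load(demand)
print([round(g) for g in grid])     # peak seen by the grid stays well under 900
```

A real dispatch loop would also track the battery’s state of charge and respond on the second-to-millisecond timescales described above; this sketch only shows why fast storage flattens the profile that reaches the grid.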
However, tackling these issues also means enormous challenges and changes for us. We’re increasingly realizing that the old business model of “sell products, collect payment” no longer works. Now we need to co-develop and co-create with customers—first build the solution together, and only then talk about commercial returns. This requires a fundamental shift in our business model.
In addition, the challenge of technology convergence is intensifying. In the past, we mainly supplied power-related peripheral equipment, but now we need much deeper integration with the computing core (chips and storage). Power delivery has to move from the traditional “rack-level” or “server-level” approach to a much more granular “chip-level” approach (“power to chips”), managing the energy consumption of individual chips and how power is delivered to them. Meanwhile, the technology itself is also evolving—for example, the shift from AC to DC, and from mechanical equipment to solid-state power equipment based on semiconductors (such as IGBT chips). All of these transitions require us to work with major global data center customers to put them into practice and explore them together.
AI Adoption Must Be “Step by Step”
Liu Xiangming: How does IBM view the biggest obstacles to getting generative AI truly implemented inside enterprises? And looking back at the past few decades of enterprise IT enablement—from informatization to digitization to intelligent transformation—how do you think this round of challenges is different?
Chen Xudong: It went through four stages. The earliest was what you might call “computerization,” then came “informatization,” followed by “digitization” and “intelligent transformation.” My view is that these stages are progressive, but not linear. It’s not the case that once you finish informatization and then achieve digitization, you’re done. In reality, some work has to circle back and make up for gaps in informatization. For various reasons—technological progress and others—enterprises move forward layer by layer, while also cycling back and repeating certain steps.
Even now, many companies haven’t finished their informatization work. For example, although Chinese companies may have implemented ERP systems and improved financial management, a large number of manufacturing companies still may not have any system to manage asset management (such as massive fixed assets). How to make these assets work better and extend their service life becomes a new round of informatization work. In the past it was just about managing the books; now it’s about managing physical assets. Only on the basis of this informatization—after accumulating more information—does it become possible to move into digitization.
Up to now, companies have accumulated a vast amount of information and data—but how can it be used to drive the next stage of development? In the past, a lot of data wasn’t leveraged well, especially R&D data. Even the failed experiments and erroneous conclusions buried in it can carry enormous value waiting to be mined. That may require going back to informatization—retrieving that data and building things like R&D management systems. So this process is cyclical and repeating.
In the AI era—especially as generative AI moves toward broad adoption—the prerequisite is that a company’s informatization and digitalization foundation has reached a certain level. In my view, the biggest difference between the AI era and the digitalization/informatization era is this: it used to be driven mainly by IT departments, whereas now we’re entering a phase driven mainly by business units. In the past, it was mostly about solving general-purpose problems (for example, using ERP to handle finance). But today, many of the problems enterprises face can’t be addressed with off-the-shelf, general-purpose software—yet AI tools can be used to optimize processes. That’s where AI can deliver tremendous value.
Even so, it still needs a strong foundation. Think of an agent (like “Crayfish”): it needs to call applications. If those applications don’t exist, who is it going to call? Ultimately, the foundation still has to be a set of applications in place. When enterprises want to automate things, they also need to invoke internal applications to do the work. As a result, this becomes a process of motivating employees or the organization to think about how to optimize the business and improve efficiency, then translating those needs into IT requirements and implementing them. This really becomes the kind of “co-creation” we just talked about, rather than being led by the IT department. So once AI applications become widespread, the approach each company takes may look very different. It won’t be like today, where you look at manufacturing companies and their ERP systems are all more or less the same.
Liu Xiangming: Back to Schneider Electric. You just mentioned that Schneider Electric has already done a lot of AI projects. I’d like to ask you to share some experience in this area. You also noted that you’ve gone from predictive maintenance and machine vision to R&D and the supply chain, and even optimization across the entire production end-to-end process. What experience can you share with everyone?
Xiong Yi: First, you need an overall architecture—whether for AI or for the existing software and digitalization work.
Take our EcoStruxure architecture as an example: we divide it into three layers. The top layer is “management optimization” (Optimize), which primarily uses large models to analyze the data coming up from the lower layers and provide decision mechanisms or recommendations. This layer covers many scenarios, such as energy optimization, automation/intelligence optimization, and management information system optimization.
The middle layer is “operations control” (Operate), which is more tightly integrated with edge-side control and physical equipment. Because we have a large installed base of electrical and automation devices, once the data has been processed by models, it must enable closed-loop control. At the same time, this data also needs to be preliminarily analyzed at the edge before it is uploaded.
The bottom layer is “onboarding and adaptation” (Onboard), meaning the layer for connecting all devices and collecting data. Devices themselves don’t “speak,” so the data has to be brought out and captured through various means (for example, evolving from black-box equipment to devices with controls and screens). Much of the foundational work we see, such as data acquisition, happens at this layer.
In addition, a vertical data model or data platform—DataCube—is needed to connect the data required by the top, middle, and bottom layers, so everyone can communicate using a unified language.
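The three-layer flow just described (Onboard collecting device data, Operate doing edge-side analysis and closed-loop control, Optimize aggregating results into management recommendations, all speaking one shared data language) might be sketched roughly like this. Every class, function, field, and threshold here is a hypothetical illustration, not the EcoStruxure or DataCube API:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """Shared record type: the 'unified language' all three layers use."""
    device_id: str
    metric: str
    value: float

def onboard(raw_signals):
    """Bottom layer: connect devices and turn raw signals into shared records."""
    return [Reading(d, m, v) for d, m, v in raw_signals]

def operate(readings, limit):
    """Middle layer: edge-side analysis producing closed-loop control actions."""
    return {r.device_id: ("throttle" if r.value > limit else "ok")
            for r in readings if r.metric == "temperature"}

def optimize(actions):
    """Top layer: aggregate lower-layer results into a recommendation."""
    hot = [d for d, a in actions.items() if a == "throttle"]
    if hot:
        return f"{len(hot)} device(s) over limit; review cooling schedule"
    return "all nominal"

signals = [("press-1", "temperature", 71.0), ("press-2", "temperature", 64.0)]
actions = operate(onboard(signals), limit=70.0)
print(actions)
print(optimize(actions))
```

The design point the sketch tries to mirror is that each layer only consumes the shared record type, so a new device or a new optimization model plugs in without the other layers changing.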
Getting the architecture straight is our most important lesson learned—this is the first step. Once you have the architecture, the next step is to identify the respective scenarios. Every year, we hold an AI “Dashibe Cup” competition, which is essentially a company-wide innovation activity with full participation.
For example, the supply chain team can propose more than a hundred ideas or cases each year, such as the production planning optimization we mentioned earlier, predictive maintenance for equipment, and improving overall equipment effectiveness (OEE). Various back-office functions, such as R&D, also put forward ideas—for instance, how to improve development efficiency and reduce reliance on contractors. I believe it’s crucial to seize these scenarios. This internal mechanism runs quite effectively for us: it enables employees to surface ideas and then quickly put them into use through short-cycle projects. That’s probably part of the experience we can share.
What we’ve done internally is a bit like what Mr. Chen described as the “Customer Zero” approach. At the same time, we also empower our customers and ecosystem partners. For example, we work with commercial complexes like Swire: we use AI engines to optimize their energy efficiency—HVAC, elevators, chillers, and so on—helping them improve efficiency and reduce costs, which in turn improves the experience for tenants and consumers. We have many cases like this.
Take partners in the ecosystem as another example—such as system integrators and panel builders. They want to build longer-term partnerships with us, and we help them improve efficiency. If we improve our own efficiency but they don’t, the efficiency of the entire value chain won’t improve. So we share our experience and empower ecosystem partners.
Empowering others by sharing our own practices is also something we see as highly valuable. That’s roughly what I wanted to share.
Liu Xiangming: Let me press a bit further. You just mentioned the AI competition—people came up with a lot of project ideas, but you can’t do them all. How do you evaluate these projects and decide which ones to launch and which ones not to? And on the ROI topic we’ve been discussing—how do you actually assess the ROI of an AI project in practice?
Xiong Yi: First, you absolutely have to evaluate whether it improves work efficiency or production efficiency. The key is whether you can set a very clear, quantifiable target for the project. Some people say, “This is great—it’s so convenient,” but that’s not a quantified metric. You have to be explicit: How many hours can your work time be reduced from, and to what? How many people does it reduce from, and to how many? Or take the targets we set for factories every year—for example, overall labor productivity must improve by five points year over year (that requirement is extremely ambitious and very hard). Can your project support achieving that target?
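A quantified target of the kind Xiong describes can be turned into a back-of-the-envelope payback check. All figures below are made-up assumptions for illustration; the “2 hours down to 5 minutes” scenario echoes the production-planner example mentioned earlier, but the investment and cost numbers are invented:

```python
def payback_months(investment, hours_saved_per_day, hourly_cost, workdays=21):
    """Months until cumulative labor savings cover the project investment.

    All inputs are hypothetical planning numbers, not real project data.
    """
    monthly_saving = hours_saved_per_day * hourly_cost * workdays
    return investment / monthly_saving

# e.g. a planner going from ~2 hours to ~5 minutes a day (~1.9 h saved),
# with an assumed 60,000-yuan project cost and 150 yuan/h loaded labor cost.
print(round(payback_months(investment=60_000, hours_saved_per_day=1.9,
                           hourly_cost=150, workdays=21), 1))
```

The point of the exercise is the one made above: if a project’s sponsor cannot fill in these inputs with defensible numbers, the project has no quantifiable target and should not pass selection.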
Projects in the supply-chain domain are actually the easiest to assess, because there are more hard metrics. But in scenarios like customer service, which we just mentioned, evaluation is much more challenging. When we first talk about customer service, people say, “No way—our customers call with highly specialized, complex questions. An AI agent can’t handle that.”
I think that’s actually a misconception. What people call “complex” is often just because you haven’t taught it yet. The first time it definitely won’t work—but what about the tenth time? The twentieth time? What happens when the veteran experts retire? So in many cases, it still comes down to people’s mindset.
All in all, if you can’t even set a clear objective for a project yourself, then that project definitely shouldn’t make it through the selection process. Of course, another factor is how much you’re investing. If, say, the total budget is only 100 yuan, then you rank by priority. In a typical year, we can line up dozens of projects that are actually executable. In the end, the improvements are quite noticeable—it’s really a case of achieving a lot with a small budget.
Liu Xiangming: You just mentioned visual inspection. We used to think visual inspection meant yield wasn’t that high, there was lots of data, and the model could learn quickly. But as you said, Schneider Electric’s quality is very good and the defect rate is very low—so how does it train and learn in that case? And Xudong, what’s your approach? Honestly, we hadn’t really thought about this before—we were thinking it would quickly help raise yield. But if yield is already very high, is this still an open question now, or has it already been solved?
Xiong Yi: After years of accumulation, it’s working very well now. We’re building models and learning from large numbers of samples to identify defective products. On the production lines where it has already been deployed, the false-positive rate of AI-based visual inspection has dropped to within 0.5%, while the miss rate has fallen to 0%. And this solution has also been rolled out to several other plants.
During the improvement process, the curve at the beginning may not be linear, but as you reach the final stage it gradually levels off. The key is that it (the technology) doesn’t depend on people. So as long as you keep investing and keep doing it, the longer you stick with it, the more clearly the return on investment will show up.
This is also what I tell many clients. If a company’s baseline is very low, it may see a very obvious improvement in the early stage. But in reality, the hardest part is that last fraction of a percent—for example, going from 99.1% to 99.3%, then to 99.9% and 99.99%. It’s like Six Sigma, or the power grid’s “six nines” safety requirement—the final stage is extremely difficult. Once you truly reach that stage, you can’t do it without these tools or technologies; there’s no way you’re going to move from five nines to six nines.
Chen Xudong: Yes—so you probably did spend quite some time getting the system’s performance up. IBM has indeed encountered this issue. Because we build platforms, we’ve specifically developed a feature designed for learning in scenarios where the yield rate is exceptionally high.
It first learns the characteristics of qualified products and builds an internal model of a “perfect product.” Then, when it detects image regions that don’t match that authentic-product model, it triggers an alert. This is a kind of reverse thinking: it works by identifying “anomalies” rather than relying on pre-defined defects. That way, it can quickly surface some problematic points and accumulate defect samples faster. So, down the road, we could collaborate in this area.
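The “model the good product, flag deviations” idea Chen describes can be illustrated with a toy per-feature z-score detector: fit mean and spread on known-good samples only, then flag any part that strays too far from that learned model. This is a simplified stand-in for the platform’s actual anomaly-detection feature, with invented feature values:

```python
import math

def fit_good_model(good_samples):
    """Learn per-feature mean and std from qualified products only."""
    n, dims = len(good_samples), len(good_samples[0])
    means = [sum(s[i] for s in good_samples) / n for i in range(dims)]
    stds = [math.sqrt(sum((s[i] - means[i]) ** 2 for s in good_samples) / n) or 1.0
            for i in range(dims)]  # guard against zero spread
    return means, stds

def is_anomaly(sample, model, threshold=4.0):
    """Flag a part whose any feature deviates beyond `threshold` sigmas."""
    means, stds = model
    return any(abs(x - m) / s > threshold for x, m, s in zip(sample, means, stds))

# Toy measurements: edge width (mm), hole radius (mm) on qualified parts.
good = [[1.00, 0.50], [1.02, 0.49], [0.98, 0.51], [1.01, 0.50]]
model = fit_good_model(good)
print(is_anomaly([1.00, 0.50], model))   # consistent with the good-part model
print(is_anomaly([1.60, 0.50], model))   # far outside it, so flagged
```

This is why the approach suits high-yield lines: it needs no defect samples at all, only the abundant good parts, and anything the model has never seen becomes an alert to review.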
AI Will Bring Changes to Both Employees and Companies
Liu Xiangming: Mr. Chen, you previously mentioned having business teams propose AI needs first, and then coordinating technical resources to implement them. With this “business-driven” model, how do you avoid a gap in understanding between the technical team and the business departments? In AI projects, what roles should the business lead and the technical lead each take on?
Chen Xudong: AI implementation is a complex process. Most companies start out by experimenting more with open-source tools. I believe President Xiong’s company likely started the same way. After they’ve rolled out a certain number of applications, the company realizes it needs a platform. But at the beginning, most companies don’t think they need a platform—they can get by with all kinds of solutions and models.
That’s when IBM plays a role—you might not think of IBM when you’re building one or two applications, but once you need to manage hundreds of applications, you’ll think of IBM.
When the number of applications grows to a certain scale (possibly even more than the number of employees), how do you manage them? IBM has already prepared for this: in addition to the Watsonx platform itself, we’ve also built Watsonx Orchestrate to coordinate and manage these systems. In the future, these systems will also be able to call each other—almost like an internal app marketplace. IBM has put real effort into this. We’ve been thinking ahead, anticipating what problems enterprises will run into next and laying the groundwork in advance. With this kind of platform readiness in place, we can serve enterprises at different stages and of different sizes.
For example, if a company says at the outset, “We’re just starting to experiment,” we might encourage them to go ahead and try—run with some open-source tools first. At that point, the main driver may simply be one person’s idea, and the spend is minimal, so there isn’t much conflict between IT and the business side. After all, it doesn’t cost much, and everyone feels it’s worth a try.
Then, after trying a few applications, they may start moving into more “production-system” environments. At that point, it’s not something you can just casually deploy to production. The IT team needs to put in a lot of effort to keep a close eye on the system, or you need to look for enterprise-grade solutions. For many companies, this is when they begin to think about a platform—so it turns into an IT decision. Once the IT department has built the platform, each department can then do a lot of things on top of it. So I think this process is interactive and evolutionary; it may not be as hard to cross that threshold as people imagine—once you reach a certain point, it just happens naturally.
Liu Xiangming: Final question: we just talked about the future of software. Listening to both of you, it’s clear that getting AI into enterprise use has indeed brought many changes. On the one hand, it no longer seems to make such a big splash. In the past, deploying a new system was always a bone-deep upheaval, framed as an existential choice (“transform and you risk dying; don’t, and you’re just waiting to die”). Now it seems much less fraught, as if you can casually roll out some lightweight applications. On the other hand, it may be shaking the very foundations of software: the underlying layers of systems could change dramatically.
What do you two think? I’d especially like to hear your views—how do you see the future?
Chen Xudong: Let me start with whether AI tools can replace software development. In fact, some things that happened around the Spring Festival period (referring to the capital market's concerns that AI could disrupt the software industry) prompted everyone to think about this question.
In my view, given what AI can do today, it still can’t directly replace a complete, complex, enterprise-grade software system. For example, you can’t just say, “AI, build me a Salesforce.com,” or “There’s an ERP—AI, just make an ERP and replace it.” Today’s AI isn’t at that level yet, though it’s hard to say whether it might be in 10 or 20 years, because the technology is evolving extremely fast.
So what can AI do right now? It can help you handle many parts that can be modularized. Those large systems—especially enterprise systems—contain many complex logical relationships inside. Of course, AI might become capable after learning for a while longer, but it would need to be given access so it can learn. So in the end, the ones most likely to build an upgraded, next-generation ERP may still be vendors like SAP, because they have the data for the AI to learn from; for others, it’s not that easy to get access and learn the same way.
So why have (software companies’) share prices started to rebound? It’s because we’ve realized that our customers aren’t particularly worried that “maybe I don’t need you anymore—I can just write a program myself and get it done.” It seems no customer has come to us with that question yet. Because the people serving as CIOs inside enterprises know very well that the program itself is only a small part of the overall IT work.
What we talk about more these days is “IT modernization.”
IT modernization actually covers a lot:
First, hardware modernization has been ongoing all along. If you buy a server today, there’s no way you’ll use it for 20 years. In a few years, new technology will come out, and with the old technology you won’t even be able to find maintenance parts, so you have to upgrade to the next generation of servers. Hardware modernization has never stopped.
Second, why are people relatively lazy about software modernization? Systems at some U.S. airports, for example, are still written in COBOL, and nobody can be bothered to change them. Once the hardware was upgraded, rewriting the software didn't seem particularly meaningful: the systems were extremely stable, and the underlying business logic hadn't changed much in decades. In fields like airports and banking that prize extreme reliability and stability, systems are very hard to replace.
Third, there are also some newer areas, such as hybrid cloud architectures, containerization technologies, databases, and so on—all of these are part of IT modernization. Ultimately, modernization may also affect changes to organizational structure and processes.
So this isn’t something you can solve simply by using AI to rewrite or translate some code.
Therefore, I think this “storm” over AI’s impact on enterprise software has already passed. However, it may force those companies (software vendors) that originally wrote this code to accelerate their business transformation—and that is certainly an inevitable process.
Xiong Yi: First, I don't think software and AI are in a substitution relationship. It's not that once you have AI, no one will buy software anymore, or that large-scale enterprise software will disappear. Enterprise software has already faced major challenges and changes over the past few years; even without AI, it has been evolving on its own, shifting from a relatively fixed, rigid form that solves basic problems to something that can be flexibly configured: a relatively stable foundation on which upper-layer applications can be developed much more flexibly.
AI is, in fact, a tool that helps this evolution, enabling software to become more flexible. That’s my basic view.
So yes, large-scale enterprise software is still continuing to evolve. At this stage, CIOs may be focused on the question of “Will AI replace it?”, but the real corporate decision-makers—CEOs, chairpersons, and the like—don’t pay that much attention to that. They feel that if it’s good enough, then it’s good enough: if it solves the basics, keeps processes running, supports approvals, and enables shipping, then just keep using it—don’t go poking at it. Because it isn’t the biggest problem they’re facing right now.
What they care more about is: How can I use certain AI tools or approaches to truly increase my business value and product value, or improve customer service satisfaction? They want to use data to do those things, rather than thinking about how to optimize enterprise management software or the software on the production line.
So I still feel that “AI replacing large-scale enterprise software” isn’t a real pain point. For many companies, what we’re dealing with is shifts in market competition. In reality, how to make AI deliver value outwardly—creating value for customers—or how to embed AI into products and applications: that’s what I see as the core value. I’m actually not particularly concerned with whether it will replace those things.
Liu Xiangming: Could I ask the two of you to give some core recommendations to all the companies that are on the way toward intelligent transformation? Three points each—helping them seize this opportunity, avoid wasting money, and push forward in a systematic way.
Chen Xudong: 1) Strengthen the digital foundation: You have to build a solid digital base first and ensure the data foundation; otherwise AI will be hard to realize, and you’ll end up doing twice the work for half the result.
2) Try proactively and build firsthand feel: No matter how big or small the steps are, you must start exploring and practicing AI. Gain direct experience and understanding through real use (such as deploying AI tools), and avoid armchair theorizing.
3) Choose a platform when scaling: For large enterprises, when moving from pilots to scaled rollout, you should choose a suitable platform. This protects early-stage investment and prevents every project from becoming a separate, new investment—enabling low-cost, manageable internal expansion.
Xiong Yi: 1) Platform mindset (holistic mindset): Oppose fragmented, point-solution AI buildouts, and emphasize starting from the enterprise-wide perspective to build a unified platform.
2) Scenario-driven execution and ROI: Emphasize that AI applications should focus on clear scenarios that can generate value quickly, with a clear return on investment (ROI)—staying proactive in experimentation while avoiding blind spending.
3) Continuous accumulation and iteration: Emphasize the systematic buildup of internal knowledge, data, and talent, and turn project experience into reusable, repeatable organizational capabilities through approaches such as building communities.
Liu Xiangming: In the new year, what are the strategic priorities for Schneider Electric and IBM?
Chen Xudong: Our strategy is actually very clear. The overarching strategy is hybrid cloud and AI. We will go all-out to help Chinese enterprises with digital transformation—especially those willing to adopt our services. In terms of customer selection, we will place greater emphasis on private enterprises and multinational companies.
Xiong Yi: The company is committed to a three-to-five-year strategic focus. Our core direction is to use energy technologies to provide electrification, automation, and digital-intelligence solutions for every industry, enterprise, and household, driving efficiency and sustainable development. That is the overarching strategic direction.
Specifically, in the China market, our strategic emphasis is shifting from providing general-purpose equipment to focusing on specific industries—especially high-potential, high-value sectors such as data centers, electronics and semiconductors, food and beverage, and life sciences.
Within these industries, our priority is to help build a new-type power system, with software, digitalization, and AI-driven capabilities as top priorities. To that end, we have set up dedicated software and digital R&D centers in China (such as in Yizhuang, Beijing).
Three major strengths of the China market:
Deep technological capabilities: Especially in new-energy fields (such as photovoltaics, energy storage, and electric vehicles), China has world-leading advantages.
Rich application scenarios: China has the world’s most diverse industrial manufacturing scenarios, making it the best testbed for deploying products and technology solutions.
Integration of "technology + cost": Through the "Design to Cost" approach, we combine technological leadership with cost advantages to create competitive products.
That’s why our R&D investment and focus in China are aimed at creating products that are both technologically leading and cost-effective. We emphasize "Design to Cost"—building cost advantages into the design from the very beginning. This balance works extremely well: it helps us develop offerings for new-type power systems and software-driven digitalization, serving industry customers and Chinese companies going global, while also extending our reach into overseas markets through truly multinational corporations. That, broadly speaking, is our strategic direction.
Liu Xiangming: Final question: we’ve been talking for quite a while today. From your respective fields, what potential collaboration opportunities do you see?
Chen Xudong: I can already see at least two opportunities. One is in the area of visual inspection—our technologies are still complementary. We place more emphasis on platforms and software, so there’s room to collaborate there, and we can serve our customers together. The second is what you just mentioned: in the AI space, you’ve already reached the stage where you’re starting to look for a platform. At this stage, you’re IBM’s best kind of customer. Because if you look globally at enterprise-grade platforms right now, IBM is probably the only company truly doing this in a real way; others haven’t thought it through end to end—how to handle data, how to build models, how to build agents, and how to do governance to manage the entire AI system. For enterprise applications, IBM is unquestionably a leader, with a complete system that enables enterprises to grow faster on this platform.
And IBM has been continuously acquiring the latest, highly capable software of this era. For example, we recently announced an $11 billion acquisition of Confluent. When it comes to AI applications, you basically can’t do without this kind of foundational software. It’s also open-source, so we can provide you with more services—which is especially well-suited to China. Because China has very demanding requirements for code, and with open source, providing services may open up a new space for us. For companies in sectors like distribution, where AI data needs real-time updates, these are massive application scenarios. So I think there should be quite a lot of opportunities for collaboration.
Xiong Yi: Yes, I strongly agree. Whether for us or for many of our customers, we’re now advocating that everyone plan their enterprise AI platform properly. What Mr. Chen just said really resonated with me. In fact, last year we also worked with IBM to co-publish a report we called “AI for Green.” At its core is the “GREEN” model: G stands for Growth, R for Resilience, the first E for Efficiency, the second E for Environment, and N for New Horizon, meaning achieving breakthrough innovation and business models through AI. I think we’ve already started doing some joint work in thought leadership and market education.
Going forward, this is truly the moment when companies need to turn it into reality. We’ve just offered a lot of suggestions to these companies, but talk alone isn’t enough—what’s really needed is for everyone to work together on practical, implementable projects. When it comes to designing and building this kind of enterprise-grade platform, IBM is certainly a leading partner.