Dave Simpson


The sudden and thrilling emergence of generative AI is making businesses and governments across the world rethink how they manage their mission-critical systems, and in particular how they use AI to manage and engage with the cloud.

 

Every day I get calls from customers asking for help with AI use cases, from predictive marketing analytics to drug discovery, claims and loan risk assessment, and even machine vision. You name it, it’s being considered. It seems everybody has a project in mind, but very often they don’t have the talent to execute it.

 

It’s as though generative AI has snuck up on us from nowhere. We’ve seen a remarkable explosion of creativity and possibility, but also much bluster in the market about what efficiency and productivity benefits are immediately achievable through AI, and how much work is involved in getting there. Nor can we ignore the tremendous complexity of organisations’ cloud environments when considering what’s possible.

 

Many enterprises’ budgets are not yet constructed for a big reallocation to AI, and even if they could fund a specific project, they may not have a clear mechanism for prioritising AI projects by impact.

 

Then there is the issue of the location and quality of the oceans of data washing around an enterprise’s IT estate, and the difficult task of extracting and untangling the relevant data needed to train an AI model accurately enough to generate an optimal result.

 

Indeed, not many people can reliably tell you whether a particular large language model is going to succeed at delivering a particular use case or not.

 

However, for all this uncertainty, it is reassuring that in many ways history seems to be repeating itself.

 

The AI revolution of the last 12 months reminds me of the cloud revolution when it began a decade ago. It was clear to enterprises then that they needed to start their journey. In the first few years, they started out by testing a few things in a small part of their business, just to learn. They learnt that the agility benefit of delivering digital functionality faster could be just as valuable as the cost reductions of cloud elasticity.

 

I recommend a similar incremental learning approach to enterprises now as they navigate this next phase of technology transformation using AI. Rather than charging head-on into a multitude of shiny use cases, the best way forward is to identify potential opportunities and focus on a few simpler ideas. This is why we have made substantial investments in operational AI (AIOps) to improve the availability of mission-critical systems, and to reduce total cost of ownership using FinOps approaches.

 

For public-facing enterprises, it is also essential to take a holistic view of your tech estate, to understand the total cost of ownership of your end customer’s entire digital experience, and therefore how AI might reduce it.

 

For instance, it is rare to hear anyone talk about the mainframe as part of cloud cost management, yet the number of MIPS (millions of instructions per second) you consume on the mainframe can be driven up by badly designed calls made to it from a front-end digital application running on the public cloud. I see cost optimisation as a hybrid IT cost optimisation problem across applications, data, AI and all infrastructure, including mainframes.

 

Banks use mainframes to process transactions, but no consumer is logging into the mainframe to do those transactions; they are transacting through an app on their phone. If the app’s queries to the mainframe are not carefully structured, they could be driving up mainframe usage needlessly.
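
As a minimal sketch of what “carefully structured” can mean in practice, consider caching repeated lookups in the front-end or API layer so that every screen refresh does not become a fresh mainframe transaction. The function names and the 30-second freshness window below are assumptions for illustration only, not a real banking integration.

```typescript
type CacheEntry = { value: number; expiresAt: number };

const balanceCache = new Map<string, CacheEntry>();
const TTL_MS = 30_000; // assumed 30-second freshness window for a balance

// Stand-in for the call that ultimately consumes mainframe MIPS;
// the real integration (for example via an API gateway) is not shown.
async function fetchBalanceFromMainframe(accountId: string): Promise<number> {
  return 0; // placeholder
}

// Serve repeat requests from the cache so each screen refresh does not
// turn into a new mainframe transaction.
async function getBalance(accountId: string): Promise<number> {
  const cached = balanceCache.get(accountId);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.value; // no mainframe work for this request
  }
  const value = await fetchBalanceFromMainframe(accountId);
  balanceCache.set(accountId, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```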

 

This is why it’s important to approach modernisation and cost optimisation holistically. In this space, the data sources have grown incrementally over decades, and it is important to strengthen observability. Older IT environments were obviously not designed with the same level of instrumentation as newer ones, and integrating them well was not top of mind when the new environments were being built as quickly as possible.

 

Targeted and implemented correctly, AI can generate these sorts of insights and then self-heal to optimise the entire IT estate, not just a portion of it.

 

On the generative AI side, it is very beneficial to have governance and risk management in place before launching large-scale transformation projects. The data security and privacy protections in place at many enterprises are often insufficient for aggressive use of these technologies. I have seen good use cases for raising developer productivity in operations and analytics, where rapid prototypes or one-off analytical queries can be developed quickly while the underlying databases or code bases are masked to prevent leakage. End-user tools focused on summarising non-confidential documentation and verifying quality are good data-restricted use cases that can bring quantifiable benefits. The triangle of growth, profit and risk is balanced in different ways across these two domains of use cases.
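
To make the masking point concrete, here is a minimal sketch, with invented patterns and function names, of how obvious identifiers might be redacted before a prompt leaves the enterprise for an external generative AI service. A real masking policy would be far more thorough.

```typescript
// Invented example patterns; a production policy would cover far more
// identifier types and be applied at the data layer, not just the prompt.
function maskSensitiveText(text: string): string {
  return text
    // mask email addresses
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "<EMAIL>")
    // mask long digit runs that may be account or card numbers
    .replace(/\b\d{8,19}\b/g, "<ACCOUNT_NUMBER>");
}

// A one-off analytics prompt is masked before it leaves the enterprise.
const prompt = maskSensitiveText(
  "Summarise complaints from jane.doe@example.com about account 12345678901"
);
// -> "Summarise complaints from <EMAIL> about account <ACCOUNT_NUMBER>"
```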

 

But delivering that level of capability also requires a major culture change in order to break down historic organisational silos.

This moment, with the emergence of generative AI, provides a generational opportunity to take a pause and consider what kind of cultural and technological transformation is required to make all these environments sing in efficient and beautiful harmony.

 

I’m excited to see how Australian enterprises, so often at the forefront of tech innovation, seize this moment and deliver tangible results to their business and customers.
