Tuesday, January 30, 2024

CIO struggles: Communicating effectively to your CFO or C-Suite

During my tenure as a CIO, the yearly budgeting season was always a source of apprehension, primarily because of the inevitable question from the CFO or the C-Suite Team about the tangible benefits of our ever-expanding IT department and the justification for new project initiatives and investments. This is a common concern for CIOs, who often grapple with justifying IT investments, especially when these expenditures constitute a significant portion of a company's total revenue, ranging from 1% to over 50%.

Articulating the value of IT investments is particularly challenging for many IT departments. The crux of the matter is that IT builds systems that are primarily used by other departments to boost sales, cut costs, or gain a competitive edge in the market. Typically, an IT leader might respond with a broad statement about how the IT department has supported corporate strategic objectives through various projects. Unfortunately, this claim is often unsupported by concrete data. So, the question remains: how should a CIO navigate this situation?

 IT as a Strategic Business Component 

To address this, there are two primary strategies. The first involves transitioning from a model where IT absorbs all development costs to one where these costs are charged back to user departments based on resource usage. In this scenario, IT functions as a zero-cost department, sidestepping annual budget complications. The drawback, however, is significant: this method can fragment the automation agenda, making it department-centric rather than part of a cohesive company strategy. This is particularly problematic with company-wide systems such as AI, where the impact spans all departments. Moreover, in a charge-out system, IT must issue bills to each department covering development, infrastructure usage, and overhead costs. This billing process can strain inter-departmental relationships, particularly if expenses exceed budget projections. It also risks incentivizing departments to seek external IT solutions, potentially leading to disjointed internal systems and undermining the company's unified automation strategy.

A More Effective Approach

A more effective method is to assess IT's efficacy, holding it to the same standards of corporate oversight as other departments. Just as the advertising department's impact on sales is evaluated and HR's compensation structure is benchmarked against industry standards, IT should be scrutinized in the same way.

 The effectiveness of IT can be gauged through post-implementation audits of significant system projects. These audits, conducted a year after a system goes live, involve a thorough analysis to verify that the objectives and ROI were achieved. The audit process can be complex and time-consuming, especially if the original project team has undergone changes. For example, a project implementing a new customer relationship management system could be audited for its impact on customer retention rates and sales cycle times. Another example might be deploying a new enterprise resource planning system, where the audit could assess improvements in supply chain efficiency and reductions in operational costs.

 Involving Cross-Departmental Leadership and the Operations Director

 Crucial to this approach is the involvement of cross-departmental leadership and the Operations Director in the auditing process. This collaboration ensures a comprehensive and multi-perspective analysis of IT projects. For instance, the Operations Director can provide insights into how IT initiatives have optimized operational processes, enhanced efficiency, or reduced bottlenecks. Consider a scenario where IT deploys a new inventory management system.

 The Operations Director and leaders from the logistics and procurement departments could collaborate in the post-implementation audit. Their collective insights would evaluate the system's direct impact on inventory management and its broader implications for supply chain efficiency and procurement processes. 

 Involving the Risk Committee 

An integral part of this approach is the involvement of the organization's risk committee, typically a part of the board. This committee is crucial in supporting IT investments and in recognizing and mitigating the risks associated with these initiatives. Its participation ensures that IT projects align with the organization's broader risk management framework and contribute to its security and resilience.

The user department and IT might be hesitant to conduct these audits for various reasons, including potential discrepancies in the initial ROI projections or reluctance to revisit past decisions regarding headcount reductions.

The ideal approach for conducting these audits is through an independent body, ideally part of the company's financial division. Having been involved in the initial ROI calculations, this group is well placed to deliver a neutral and accurate assessment. By adopting this method, the user department and IT are held accountable for their commitments, and the CIO can confidently respond to queries about IT investments. For instance, the CIO could report to the CFO or the C-Suite Team: "This year, we launched 10 projects, resulting in a 25% increase in sales and a 15% reduction in expenses." Imagine the positive impact of such a conversation, not just with the CFO or the C-Suite Team but across the entire organization.

CIO thoughts :) 

Grammatically edited with Grammarly and ChatGPT-4 (OpenAI, 2024), https://chat.openai.com
Graphic created with DALL·E

Wednesday, January 24, 2024

More AI Thoughts and Learning

AI encompasses many aspects. Generative artificial intelligence (AI) and large language models (LLMs) like ChatGPT represent just one facet of AI, but they are currently its best-known segment. In many ways, ChatGPT brought AI to the forefront, generating widespread awareness of artificial intelligence as a whole and accelerating its adoption.

You're probably aware that ChatGPT wasn't constructed overnight. It's the result of a decade of effort in deep learning AI. That ten-year period has provided us with novel ways to utilize AI, ranging from applications that predict your typing to self-driving cars and algorithms for groundbreaking scientific discoveries.
AI's extensive applicability and the popularity of LLMs like ChatGPT have information technology (IT) leaders asking: Which AI innovations can deliver business value to our organization without depleting my entire technology budget? Here is some guidance.

AI Options
From a high-level perspective, here are the AI alternatives:

1. Generative AI: The cutting edge

Prominent generative AI leaders, such as OpenAI's ChatGPT, Meta's Llama 2, and Adobe Firefly, employ LLMs and related generative models to deliver immediate value for knowledge workers, creatives, and business operations.
Model sizes: Ranging from approximately 5 billion to over 1 trillion parameters.
Ideal for: Transforming prompts into fresh content.
Drawbacks: Can produce hallucinations, fabrications, and unpredictable outcomes.

2. Deep learning AI: An emerging workhorse

Deep learning AI employs the same underlying neural network structure as generative AI but lacks the ability to comprehend context, compose poems, or create illustrations. It delivers intelligent applications for translation, speech-to-text conversion, cybersecurity monitoring, and automation.
Model sizes: Varying from millions to billions of parameters.
Ideal for: Extracting meaning from unstructured data like network traffic, video, and spoken language.
Drawbacks: Not generative; model behavior can be opaque; results can be challenging to explain.

3. Classical machine learning: Patterns, forecasts, and decisions

Classical machine learning serves as the proven foundation for pattern recognition, business intelligence, and rule-based decision-making, yielding explainable outcomes.
Model sizes: Uses algorithmic and statistical approaches instead of neural network models.
Ideal for: Classification, pattern identification, and forecasting results from smaller datasets.
Drawbacks: Lower accuracy; the source of basic chatbots; unsuitable for unstructured data.

5 Strategies to Harness LLMs and Deep Learning AI

While large language models (LLMs) are making headlines, every type of AI, from generative AI to traditional deep learning and classical machine learning, holds value. How you leverage AI will vary based on the nature of your business, what you produce, and the value you can generate with AI technologies.

Here are five strategies to employ AI, ranked from the simplest to the most challenging.

1. Utilize the AI integrated into your existing applications.
Business and enterprise software providers like Adobe, Salesforce, Microsoft, Autodesk, and SAP are embedding multiple AI types into their applications. The cost-effectiveness and performance of using AI within your existing tools are hard to surpass. Example: Imagine your sales and service teams already run on Salesforce. Enabling Einstein's built-in lead scoring and case classification gives them AI-driven prioritization inside the CRM they already use, with no model development, integration work, or new infrastructure. This approach delivers an AI-driven productivity boost while your team keeps working in the tools it knows.

2. Embrace AI as a service.
Embracing AI as a service refers to leveraging external AI platforms and solutions that are accessible through APIs or cloud-based services. These services provide pre-built AI capabilities that can be easily integrated into your applications or workflows. Example: Consider a marketing analytics company that needs to analyze customer sentiment from social media data. Instead of building a sentiment analysis model from scratch, they subscribe to an AI-as-a-service platform that offers sentiment analysis APIs. They integrate this service into their analytics platform, allowing them to quickly and accurately gauge customer sentiment without investing in extensive development.
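
To make the pattern concrete, here is a minimal sketch of calling a hosted sentiment API from Python. It uses Google Cloud's Natural Language service as one illustrative provider (an assumption on my part, not the unnamed platform in the example above); any comparable sentiment-analysis API follows the same request-and-score shape. It assumes the google-cloud-language package is installed and GCP credentials are configured in the environment.

```python
# Minimal AI-as-a-service sketch: send text to a hosted sentiment API and
# read back a score. Provider choice and setup are assumptions.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def sentiment_score(text: str) -> float:
    """Return a sentiment score in [-1.0, 1.0] for a social media post."""
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

print(sentiment_score("The new release is fantastic, checkout was effortless!"))
```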

3. Develop a customized workflow with an API.
With an application programming interface (API), applications and workflows can tap into top-tier generative AI. APIs simplify the extension of AI services internally or to your customers through your products and services. Example: A content creation company wants to automate the generation of product descriptions. They use a language generation API to create a custom content generation workflow. This API enables their writers to provide a brief description, and the AI generates detailed product descriptions, saving time and enhancing content quality.
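
As a rough illustration of this strategy, the sketch below wires a language-generation API into a tiny product-description workflow. It assumes the OpenAI Python SDK (version 1.x) and an OPENAI_API_KEY in the environment; the model name and prompt wording are placeholders, and any comparable generative API could stand in.

```python
# A hypothetical custom workflow: turn a writer's short brief into a full
# product description via a generative AI API. Model and prompts are
# illustrative assumptions, not a prescription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_description(brief: str) -> str:
    """Expand a short product brief into a detailed description."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "You write concise, accurate e-commerce product descriptions."},
            {"role": "user",
             "content": f"Write a 120-word product description based on this brief: {brief}"},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(generate_description(
    "Insulated stainless-steel water bottle, 750 ml, keeps drinks cold for 24 hours."
))
```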

4. Retrain and fine-tune an existing model.
Retraining proprietary or open-source models on your own datasets produces smaller, more refined models that deliver precise results on cost-effective cloud instances or local hardware. Example: A retail company wants to improve its demand forecasting. Instead of building a new model, they take a pre-trained demand forecasting model and fine-tune it using their historical sales data. This approach allows them to tailor the model to their specific business needs, resulting in more accurate forecasts.
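
Here is a minimal PyTorch sketch of the fine-tuning idea, under clearly stated assumptions: a small forecasting network stands in for the pre-trained model, the checkpoint path is hypothetical, and random tensors stand in for the company's engineered sales features. The pattern to notice is freezing the generic layers and training only the task-specific head.

```python
# Fine-tuning sketch: load (hypothetically) pre-trained weights, freeze the
# backbone, and retrain only the output head on company-specific data.
import torch
import torch.nn as nn

class DemandForecaster(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)  # predicts next-period demand

    def forward(self, x):
        return self.head(self.backbone(x))

model = DemandForecaster()
# model.load_state_dict(torch.load("pretrained_demand_model.pt"))  # hypothetical checkpoint

# Freeze the generic backbone; fine-tune only the head.
for p in model.backbone.parameters():
    p.requires_grad = False

X = torch.randn(256, 8)   # stand-in for engineered sales features
y = torch.randn(256, 1)   # stand-in for observed demand
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```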

5. Train a model from scratch.
Training a model from scratch involves developing a custom machine learning or deep learning model tailored to your specific needs. While this can be resource-intensive, it offers complete control over the model's behavior and can lead to highly specialized solutions. Example: In the healthcare industry, a research organization needs an AI model to diagnose rare genetic disorders from genomic data. Since existing models lack the necessary specificity, they embark on training a custom deep learning model using their extensive dataset. This customized model becomes highly proficient in identifying rare genetic mutations, aiding in early diagnosis and treatment.
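
By contrast, training from scratch means every parameter is learned from your own data. The toy sketch below uses scikit-learn's MLPClassifier on a synthetic dataset standing in for a proprietary labeled dataset; the feature counts and hyperparameters are illustrative assumptions only.

```python
# "From scratch" sketch: no pre-trained weights, every weight is learned
# from your own labeled data (synthetic here).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for a proprietary labeled dataset (e.g., engineered genomic features).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```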

Choosing the Optimal Infrastructure for AI
The appropriate infrastructure for AI hinges on numerous factors, including the type of AI, the application, and its consumption. Aligning AI workloads with hardware and employing purpose-specific models enhances efficiency, boosts cost-effectiveness, and diminishes computing requirements.

From a processor performance perspective, the goal is to deliver seamless user experiences. This means producing tokens within 100 milliseconds or less, roughly 10 tokens per second, which works out to around 450 words per minute at a typical ratio of about 0.75 words per token. If results take longer than 100 milliseconds to materialize, users notice the delay. Using this metric as a benchmark, many near-real-time scenarios may not require specialized hardware. For example, a prominent cybersecurity provider developed a deep learning model to identify computer viruses. Financially, deploying the model on GPU-based cloud infrastructure proved impractical. After engineers optimized the model for the built-in AI accelerators on Intel® Xeon® processors, they managed to scale the service to secure every firewall using more affordable cloud instances.
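
As a quick sanity check on that figure, the arithmetic is simple; the 0.75 words-per-token ratio is a common rule of thumb for English text, not a measured value.

```python
# Back-of-the-envelope check of the latency-to-throughput figure above.
latency_s = 0.100                       # 100 ms per token
tokens_per_minute = 60 / latency_s      # 600 tokens per minute
words_per_minute = tokens_per_minute * 0.75  # assumed words-per-token ratio
print(words_per_minute)                 # -> 450.0
```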

Recommendations for Implementing AI

Generative AI represents a once-in-a-generation upheaval akin to the advent of the internet, the telephone, and electricity, although it is advancing at a considerably faster pace. Organizations of all sizes must harness AI as efficiently and effectively as possible, but this doesn't always necessitate significant capital investments in AI supercomputing hardware.
1. Select the appropriate AI for your requirements. Avoid using generative AI to address a problem that classical machine learning has already solved. Example: A logistics company needs to optimize its delivery routes. While generative AI can generate creative solutions, this problem can be efficiently solved using classical machine learning algorithms designed for route optimization. It's essential to choose the right tool for the specific task at hand.
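
As a toy illustration of the classical route, the sketch below orders a handful of made-up delivery stops with a greedy nearest-neighbor heuristic. A production system would use a dedicated solver (for example, OR-Tools), but the point stands: no generative model is needed for this class of problem.

```python
# Greedy nearest-neighbor routing over hypothetical delivery stops.
import math

stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 7), "D": (6, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_route(start="depot"):
    route, remaining = [start], set(stops) - {start}
    while remaining:
        # Always visit the closest not-yet-visited stop next.
        nxt = min(remaining, key=lambda s: dist(stops[route[-1]], stops[s]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(nearest_neighbor_route())  # -> ['depot', 'A', 'B', 'D', 'C']
```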

2. Match models with specific applications. Retraining, enhancing, and optimizing models improve efficiency, enabling cost-effective operation on less expensive hardware. Example: A manufacturing company wants to predict equipment failures to prevent downtime. They start with a pre-trained predictive maintenance model and fine-tune it with their equipment data. This tailored model not only improves accuracy but also runs efficiently on their existing server infrastructure.
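
For a sense of scale, here is a compact sketch of a purpose-built predictive-maintenance classifier. It uses a plain random forest rather than the fine-tuned pre-trained model described above, and the sensor features and labels are synthetic stand-ins; the point it illustrates is that a small model matched to the application trains and runs comfortably on ordinary CPU servers.

```python
# Small, purpose-built failure classifier on synthetic sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))  # stand-ins for vibration, temperature, etc.
# Synthetic label: 1 = equipment likely to fail soon.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Failure-prediction accuracy: {model.score(X_test, y_test):.2f}")
```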

3. Utilize computational resources prudently. Whether operating in the public cloud or on-premises, prioritize efficiency. Example: A financial institution uses AI for fraud detection. By optimizing their AI algorithms and deploying them on cloud instances with the right amount of computing power, they reduce operational costs while maintaining high accuracy in detecting fraudulent transactions.

4. Commence with small-scale efforts and secure early victories. This approach allows you to acquire proficiency in using AI, initiate a cultural shift, and generate momentum. Example: A small e-commerce startup begins by implementing a basic recommendation system powered by machine learning. As they gather data and refine their AI algorithms, they gradually expand their AI initiatives, achieving incremental successes that build confidence within the organization.
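
In that spirit, a "start small" recommender can be only a few lines: the sketch below computes item-to-item cosine similarity from a synthetic user-purchase matrix. Real order history would replace the toy matrix, and evaluation and personalization would come later as the initiative grows.

```python
# Tiny item-based recommender: "customers who bought this also bought..."
import numpy as np

# Rows = users, columns = products (1 = purchased). Synthetic stand-in data.
purchases = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
])

# Cosine similarity between product columns.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0)  # ignore self-similarity

def recommend(product_idx: int, top_k: int = 2):
    """Products most often bought by the same users as `product_idx`."""
    return np.argsort(similarity[product_idx])[::-1][:top_k]

print(recommend(0))  # indices of the products most similar to product 0
```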

These examples illustrate how organizations can apply AI strategies to address specific challenges, leveraging a range of AI approaches, from pre-built solutions to custom model development, while optimizing costs and maximizing efficiency.