“With AI, Cloud Is No Longer Just About Reducing Costs” – Dr Amruta Joshi, Google Cloud

Dr Amruta Joshi, Director, AI Solutions, Google Cloud

Gone are the days when IT decision-makers saw the cloud as a cost-efficiency tool. With the arrival of artificial intelligence, CEOs are now looking at the cloud to create value. OSFY’s Yashasvini Razdan spoke to Dr Amruta Joshi, Director, AI Solutions, Google Cloud, at Open Source India last year to understand how developers and enthusiasts can leverage open source to navigate this rapidly evolving landscape.


Q. What does open source mean for Google Cloud’s AI strategy?

A. Open source is a core component of our strategy, and Google Cloud remains highly committed to it. We’ve developed a series of products that actively support open source, such as GKE (Google Kubernetes Engine), which facilitates numerous open source deployments. Open source is indeed a top priority for us, and we’re dedicated to supporting open source ecosystems in every way possible.

Q. What, according to you, are the emerging trends in this open source AI landscape?

A. I’d identify three major emerging trends in this open source AI landscape. The first is the rise of open-weight models. New models are being launched every month, attracting substantial interest and giving users more flexibility. Examples include Gemma and other community-driven models, as well as a time series model recently launched by Google Research.

The second trend is the growth of community-driven platforms, which are making collaborative AI development easier and more widespread. In addition to Google Cloud’s open source community, platforms like GitHub and Hugging Face are fostering collaborative efforts to push the boundaries of open source innovation.

The third trend is the increasing focus on responsible AI within open source. The AI community is dedicating a lot of thought to ethical deployment, emphasising that as AI evolves, responsible practices should develop alongside it to ensure safe and fair use of these powerful technologies.

Q. How has cloud technology evolved from a customer’s perspective?

A. We work with SMEs, large enterprises, and individual developers, who no longer look at the cloud merely as an optimisation tool but as a significant value driver. Earlier, the cloud was primarily about reducing infrastructure costs, with conversations limited to IT departments. Now, with AI, it’s about increasing revenue and adding value for customers, and these discussions are happening at the CEO level. Missing this shift could make it difficult for companies to catch up later. The cloud’s evolving role, from cost savings to value creation, is the real advantage AI has introduced into this landscape.

Q. What does cloud technology mean for the SME segment?

A. SMEs often have the capability for customisation and offer unique solutions to their clients. This area is evolving quickly, so decisions made today may need adjustments in a year or two. What’s essential is having a platform that evolves with you, providing next-generation updates as they become available. To support the requirements of SMEs, we provide APIs at multiple levels of abstraction and a diverse range of models, from lightweight, cost-effective options to more advanced custom models. SMEs can select options based on their specific needs, and that flexibility to deploy where and how they want is at the heart of what Vertex AI and Google Cloud offer.

Q. What is the role of cloud technology in democratising and scaling AI through open source solutions?

A. I love the term ‘democratising AI’; it captures one of the broad, emerging themes shaping the direction of the industry. AI is transforming every industry, which means it must be accessible to everyone, from large enterprises to small companies and individual users. Our cloud platform makes this possible by bringing Google’s leading technology onto a flexible, accessible platform. With low-code, high-code, and even no-code options, users don’t need to be developers or deep tech experts to utilise AI. Open source is a top priority for us, especially given the developer community here at this conference (Open Source India). We believe a lot of innovation will come from the talent we’re seeing here, and providing them with a robust AI platform is certainly one of our key objectives.

Q. How does Google Cloud ensure secure and responsible use of open source and AI together?

A. Responsible AI is at the core of what we do at Google. Two aspects I will touch on are open models and community collaboration. Google Cloud’s open source Gemma models come in various versions. These models have open weights, allowing users to download, modify, and fine-tune them, offering extensive flexibility. They are also available as multimodal models, enabling combined processing of text and images.

For Gemma models, we provide toolsets that support responsible AI features. For instance, Shield Gemma automatically categorises content, identifying hate speech or inappropriate material, and provides tools for efficient implementation. This is particularly helpful for companies handling content, be it images or text, that may not have dedicated teams to monitor and update these techniques. Tools like Shield Gemma, supported by Google and the open source community, allow users to integrate state-of-the-art content moderation within their own framework and keep it current with the latest advancements. This highlights the flexibility of these tools, which can be implemented within any framework in use.

Q. How can the community ensure responsibility?

A. About two years ago, the understanding of responsibility in AI was quite fragmented. Today, even with varied interpretations, there is broader consensus on core principles like fairness and defining inappropriate content. This alignment on principles has become a solid foundation. Governments are setting standards, with the EU and the US leading in these efforts. India is also exploring relevant policies. This regulatory involvement helps define what responsibility in AI truly means, which is an essential first step.

The challenge now lies in enforcing responsible AI at scale. We need platforms and tools for automated detection and enforcement. Within the open source community, enthusiasm around model creation has surged over the last year, and I’m optimistic about seeing more responsible AI tools emerge. These tools will play a key role in evaluating how models are used and deployed, and they will mark real progress in enforcing responsibility within open source AI development.

Q. What kind of strategies do you think companies should adopt when they’re trying to secure their cloud deployments?

A. I believe companies should adopt a platform mindset. With a platform approach, you gain a comprehensive set of tools and built-in security options, which are incredibly important. You can choose to engineer every component yourself, but it requires hiring security experts and investing in highly specialised tools to ensure your system’s security. For most smaller businesses, where technology may not be their primary focus, it’s advisable to rely on the security options provided by the platform and choose the right platform from the start.

Q. Could you elaborate on the impact of AI on talent and skill development in the cloud landscape?

A. I believe we don’t have a shortage of jobs but a shortage of skills. Upskilling is essential, and it should go beyond individual efforts. While it’s encouraging to see developers engaging with Qwiklabs and AI courses, talent development should focus on every part of the organisation, asking how each role can use new platforms and technologies. Organisations that embrace this technology will achieve much more than those that avoid it.

Q. What kind of partnerships does Google Cloud have within the open source community?

A. We collaborate with open source partners at every level of the Google Cloud Vertex AI platform. For instance, we integrate third-party models from Meta, specifically the Llama models, into the Vertex AI platform. Users also have the option to use non-Google models, such as those from Anthropic and models available on Hugging Face.

Within Vertex AI, users can perform extensive customisations, including adapter tuning, and enjoy flexibility in deployment. Models can be deployed using Vertex AI, BotX, or even on your own infrastructure with GKE (Google Kubernetes Engine). GKE doesn’t tie you to Google’s infrastructure, as Kubernetes allows you to deploy on your own setup. This extensive flexibility extends across all layers, from infrastructure and platform to application, and Vertex AI integrates these options.

Q. What advice would you like to give to developers who want to upskill themselves using AI?

A. We’re entering an era of continuous learning, where new skills will need to be acquired every few months, and agile learners will likely be more successful. Hands-on development skills, as well as core skills in coding and software development, are still very important. I would also emphasise staying up to date with new APIs and trying them out. In terms of open source tools, I recommend starting with Google Cloud’s learning resources. We offer a variety of free courses, including labs through Qwiklabs, which are very effective for hands-on learning. If you’re serious, consider the Professional Machine Learning Engineer certification. These structured learning paths can help you approach the field systematically; otherwise, the volume of content out there can be overwhelming.

Q. Do you consider these pointers when hiring new recruits?

A. A strong foundation in technology and a passion for learning are really important. Many of our hires come from data analytics or infrastructure backgrounds and then upskill in AI, especially in areas like GPU tuning and AI-enhanced security. So, if you’re a software developer, consider upskilling with AI in your domain, which can make you more versatile and valuable.

Q. What brings you to Open Source India?

A. We’re here because we believe the vibrant developer community in India will drive substantial progress in AI for companies worldwide. I encourage everyone at the conference to embrace AI learning and innovation to advance both the technology and the country in the next wave of global innovation.