Generative AI is the big trending topic right now, and understandably it is featuring prominently in the news. The popularity of platforms such as OpenAI’s ChatGPT, which set a record for the fastest-growing user base by reaching 100 million monthly active users just two months after launching, is unquestionably on the minds of businesses globally.
These tools can increase productivity and efficiency by automating repetitive tasks and letting employees focus on higher-value work. They can foster enhanced creativity and innovation by assisting in brainstorming and ideation processes and generating novel solutions to complex problems. Today, AI’s applications have already been well-documented in fields such as eCommerce, security, education, healthcare, agriculture, gaming, transport, and astronomy. The business, productivity, and efficiency gains that it provides these industries are enabling them to flourish and open up new revenue streams.
But while generative AI tools bring a world of possibilities, they also open the door to some complex security concerns. For example, generative AI often requires access to vast amounts of sensitive data, which poses significant data privacy and protection challenges. Mishandling of, or unauthorized access to, these datasets can lead to breaches, regulatory penalties, and damaged reputations.
Using generative AI safely
To this point, at Zenith Live in Las Vegas last month, our technology partner Zscaler’s EVP and Chief Innovation Officer, Patrick Foxhoven, talked about the potential risks associated with AI. Patrick was quick to point out that AI is not new to Zscaler; the company has been leveraging the technology for many years now, and he said it does have the potential to change everything.
However, he also noted that the same generative AI capabilities can enable both deepfakes and data loss. Patrick talked about the importance of enabling customers to use generative AI safely and how Zscaler has added a new URL category and cloud app for tools like Bard, ChatGPT, and others. This allows admins to finely control who can access these tools and to enforce browser isolation to protect against sensitive data being uploaded.
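To make the approach concrete, here is a minimal sketch of what category-based access control for generative AI tools looks like in principle. All names, groups, and rules below are illustrative assumptions, not Zscaler’s actual API or configuration format:

```python
# Hypothetical sketch of URL-category-based access control for generative
# AI tools. Category contents, group names, and actions are illustrative.

# Hosts an admin has placed in the "AI tools" URL category.
AI_TOOL_CATEGORY = {"chat.openai.com", "bard.google.com"}

# Per-group policy: "allow" normal access, "block" it outright, or
# "isolate" (render the site in a remote browser so sensitive files
# and clipboard data cannot be uploaded from the endpoint).
POLICY = {
    "engineering": "isolate",
    "marketing": "allow",
    "finance": "block",
}

def decide(user_group: str, host: str) -> str:
    """Return the action for a request to `host` from `user_group`."""
    if host not in AI_TOOL_CATEGORY:
        return "allow"  # not an AI tool; ordinary web policy applies
    # Default-deny groups with no explicit rule for AI tools.
    return POLICY.get(user_group, "block")
```

The key design point is the "isolate" middle ground: rather than a blunt allow/block choice, users keep access to the tool while the isolation layer prevents sensitive data from leaving the organization.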
Getting smart about cyber risk and investment
Zscaler also provides risk scores for commonly used apps to determine whether their AI integrations pose a threat, based on each application’s security posture and data retention policies. Furthermore, AI insights generated by Zscaler’s new Risk 360 platform can help security teams prioritize risks, isolate affected assets, and implement preventive policies.
Zscaler Risk 360 is a comprehensive tool designed to help security leaders quantify and visualize cyber risk. It analyzes an organization’s security posture using data and analytics, enabling security leaders to build a risk profile and gain a better understanding of the financial implications of cyber risks.
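To illustrate the kind of financial quantification such tools build on, here is the classic annualized loss expectancy (ALE) calculation from standard risk-analysis practice. This is a textbook formula used for illustration, not Risk 360’s actual model:

```python
def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Classic risk-quantification formula: ALE = SLE * ARO.

    single_loss_expectancy:   estimated cost of one loss event, in dollars
    annual_rate_of_occurrence: expected number of such events per year
    """
    return single_loss_expectancy * annual_rate_of_occurrence

# Illustrative numbers: a breach estimated to cost $2M, expected roughly
# once every four years (ARO = 0.25), yields $500K of expected annual loss.
ale = annualized_loss_expectancy(2_000_000, 0.25)
```

Figures like this are what turn an abstract "we are at risk" into a dollar amount a board can weigh against the cost of a security investment.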
What I feel is particularly beneficial about this tool for customers is that it can be used as an aid to help fund projects, because it enables security leaders to be smarter about where they invest their dollars. It also enables them to have a meaningful dialogue with the board and secure funding based on insight that demonstrates what the impact of a breach might be.
Will AI steal our jobs?
But there are also many who are cautious, even highly concerned, that AI will take our jobs. IBM has reassured us that the day when humans are completely replaced by AI is a long way off. That said, the US actors’ union has had 160,000 members on strike since last week, afraid that AI will lead to far fewer employed actors in the future as studios use AI to create “digital twins” of actors.
Likewise, AI is a big issue for writers, especially with ChatGPT being used to write everything from law school and business school papers to legal briefs, with varying degrees of success. Securing limits on AI use is also a key demand of the Writers Guild of America, which has been on strike against studios and streaming services since May.
There is a wealth of industry predictions on the impact that AI will have on society between now and 2030, but the speed at which AI has started to affect our everyday lives makes me think that 2030 horizon should be brought forward. Who knows what the applications of AI will look like next year, let alone in six and a half years.
Consolidating vendors and eliminating point solutions
What I’m also finding in the current economic climate is that customers are crying out for integrated, comprehensive solutions so they don’t have to deal with multiple point products that don’t work with each other. This is one of the guiding principles for Zscaler, and many of its new offerings haven’t been cobbled together from a string of acquisitions to add functionality in areas that were lacking. Likewise, they haven’t simply been built to extend product lines and create additional revenue streams, nor are they attempting to capitalize on the latest buzz surrounding AI. Instead, they draw on Zscaler’s massive cloud security data lake to train sophisticated AI models that provide advanced insights for customers. These insights were always present in the more than 300 billion transactions and 500 trillion signals seen by the Zscaler Zero Trust Exchange every day. Now AI simply allows Zscaler to process and serve them to users in a scalable, intuitive, and actionable way.
Ultimately, AI presents a wealth of opportunities and challenges for individuals, organizations, and governments around the globe. It will be interesting to see how the technology continues to evolve in the months and years ahead, and whether businesses come to view it as a threat or as an opportunity to innovate.
By Brian Ramsey, VP America, Xalient