At re:Invent, AWS introduces new chips, AI capabilities, and improved cloud offerings.

Experts in cloud computing, clients, and tech enthusiasts have flocked to Las Vegas for the event, which runs through December 1.
In an effort to expand its portfolio and compete for market share with rivals such as Microsoft, Oracle, and SAP, Amazon Web Services has announced faster chips, its latest generative artificial intelligence capabilities to boost productivity, and the general availability of Amazon S3 Express One Zone, its newest high-performance cloud storage service.

The cloud belongs to everyone. "Customers of all sizes, all industries, and from all regions are using AWS cloud," CEO Adam Selipsky said at AWS's annual re:Invent event in Las Vegas, Nevada.

Top businesses run on the AWS cloud, he added, with customers drawn from a variety of sectors, including finance, healthcare, and the automotive industry.

The new Amazon S3 Express One Zone offers the lowest-latency cloud object storage available, and users can scale their storage up or down as needed.

According to Mr. Selipsky, it provides data access speeds up to 10 times faster, and costs up to 50 percent less, than Amazon S3 Standard.

With millions of users worldwide, Amazon S3, one of the most well-known cloud storage services, was introduced 17 years ago.

It stores more than 350 trillion objects and handles an average of over 100 million requests per second.
It is well suited to emerging use cases that require writing and accessing data millions of times per minute, such as financial model simulations, interactive analytics, machine learning and AI training, real-time advertising, and media content creation.

According to James Kirschner, general manager of Amazon S3 at AWS, the goal of Amazon S3 Express One Zone is to lower request and compute costs while providing the "fastest data access speed for the most latency-sensitive applications and enable customers to make millions of requests per minute for their highly accessed data sets".
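To give a sense of what using the new storage class might look like, here is a minimal sketch using the boto3 Python SDK; the directory bucket name, Availability Zone, and object keys are hypothetical examples and are not taken from AWS's announcement.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Hypothetical S3 Express One Zone directory bucket, assumed to have been
# created beforehand in a single Availability Zone (directory bucket names
# carry an "--x-s3" suffix).
BUCKET = "example-express-demo--usw2-az1--x-s3"

# Write a small object; Express One Zone targets latency-sensitive,
# frequently accessed data.
s3.put_object(Bucket=BUCKET, Key="features/user-42.json", Body=b'{"score": 0.97}')

# Read it back with the same GetObject call used for S3 Standard.
obj = s3.get_object(Bucket=BUCKET, Key="features/user-42.json")
print(obj["Body"].read())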

Amazon Web Services, the world's largest cloud service provider, saw strong sales growth in the September quarter.

In the third quarter of the year, its revenue amounted to $23.1 billion, an annual increase of 12.2 percent.
The company also unveiled AWS Graviton4 and AWS Trainium2, the next generation of its in-house chip families. The new chips are designed to deliver "advancements in price performance and energy efficiency" for a broad range of customer workloads, including generative AI and machine learning training.

Graviton4 offers up to 30 percent better compute performance and 75 percent more memory bandwidth than Graviton3 processors, which remain in production today.

This is AWS's fourth chip generation in five years and, according to the company, the "most powerful and energy efficient chip the company has ever built" for a wide variety of workloads.

Trainium2 is designed to deliver training up to four times faster than the first-generation Trainium chips. According to AWS, it also doubles energy efficiency and can train large language models and foundation models in a fraction of the time.

David Brown, vice president of compute and networking at AWS, stated that since silicon is the foundation of every customer workload, it is an essential area for innovation.

“We are able to provide our customers with the most cutting-edge cloud infrastructure by concentrating our chip designs on real workloads that matter to them.”

Speeches, innovation talks, builder labs, workshops, demos, and service announcements are all part of the 12th annual re:Invent event. About 50,000 people are attending in person, and 300,000 are watching the event online.

Nearly 50,000 attendees are at re:Invent in person, while 300,000 are following the event online. Photo: AWS

AWS has also announced new generative AI features for Amazon Connect, its cloud contact center service that helps businesses provide better customer experiences at lower cost.

According to AWS, the new features—which are powered by large language models—aim to revolutionize the way businesses offer customer service.

Amazon Q in Connect, for instance, will offer agents suggested responses and actions based on customer inquiries in real time, enabling quicker, more accurate customer service.

Amazon Connect Contact Lens helps identify the crucial elements of call center conversations, with AI-generated summaries that surface sentiment, trends, and policy compliance.

Another new capability allows contact center managers to quickly build chatbots and interactive voice response systems in Amazon Lex using natural language prompts, and to enhance existing systems by providing answers to frequently asked questions.

Pasquale DeMaio, vice president of Amazon Connect at AWS Applications, said the contact center industry is set to be "fundamentally transformed by generative AI," giving customer care representatives new ways to deliver personalized customer experiences.

“Yet, very few organizations possess the sophisticated machine learning know-how to quickly and effectively integrate this technology into their daily operations.”

The company also unveiled Amazon Q, a new generative AI-powered assistant designed to "streamline tasks, speed decision making, and spark creativity, built with rock-solid security and privacy," according to Mr. Selipsky. Amazon Q is intended to deliver actionable information in real time.
