- Greetings to all. It's a beautiful day here in Texas, and in fact it's an amazing day to talk about Container Observability and how each and every one of you can become successful in monitoring your container environment using AWS native solutions. My name is Arun Chandapillai, and with me today I have the most amazing Lucy. Lucy, please take it from here. - Thank you, Arun, as always, and thanks to everybody joining us today. We're very excited that you're here, and I hope you are as well. My name is Lucy Hartung, I'm the Worldwide Business
Development Specialist for AWS Observability. In today's session, we're going to cover
some of the challenges in monitoring and observing
container environments for your workloads and applications. And then we're also going
to have a quick refresher on what AWS is offering in containers, as well as AWS Observability for Containers. Then, Arun is going to dive specifically into our latest capabilities in Container Observability and share with you a journey of how Bob is using these technologies to help him monitor and observe better, resolve issues, and troubleshoot root causes. We also have resources you can take away at the end to start exploring AWS Container Observability. With that being said, let's get started. I hope most of you can relate to the feeling of getting that wake-up call at the most inconvenient time. Whether it's at 3:30 in the morning or a Friday before you want to head off for your long-awaited vacation, outages always happen at these most inconvenient times. As one of the most famous quotes says, "Everything fails, all the time," and outages and failures don't care what time it is. Our goal here today is to equip you with a set of tools so that when a problem happens, you know exactly where to look, what to look for, and how
to resolve that issue, so ideally, hopefully, you
can get back to sleep at 3:35. That is why monitoring and observability tools are essential for a company, especially for folks like you who are responsible for picking up that wake-up call and resolving issues. Now we're adding containers into the mix. It's a daunting feeling when you are lacking visibility into your container environments, adding extra stress to that wake-up call at the most inconvenient time. So this can all get even more hectic. It feels like you are riding the biggest container ship through a storm in the ocean, and our goal is to land you in smooth sailing with the technologies and capabilities that we have in AWS for gaining more visibility into your container environment. So let's first take a
look at why customers are adopting containers: why you are migrating workloads to containers and building applications on them. There are certainly advantages, and we see rapid growth in this area, with a lot of customers using container technologies. A couple of straightforward ones: containers are flexible and definitely scalable, so it's much easier for folks building workloads and applications on containers to use them as they go. They can also right-size their workloads, which is another benefit. But the reasons why customers want to build applications and workloads on containers also bring some challenges. Because containers can be quickly deployed or provisioned, they can also be quickly destroyed; this type of behavior is a primary advantage, but it is also a struggle to keep tracking changes, especially in a complex environment with a high churn rate. You can right-size your workloads, which means that a lot of
times the memory and CPUs are shared resources. It becomes difficult to monitor shared resource consumption on the physical host, and it also gets very difficult to have good indicators of container performance and application health. And last but not least, you might have some tooling that gives you one piece of the puzzle but maybe not the entire picture. So a lot of customers might only get a little bit of information based on the tools and services they're using, but not be able to really correlate that information and those metrics together to get a full picture. So we certainly see a lot of challenges in observing and monitoring these container environments. So here is a quick refresher on what AWS is offering
in terms of containers. We have a range of
various different choices. One of the most common questions customers ask us is, "Hey, help us decide which container service we should use." And that's a fair question. We often recommend customers work backwards from their application requirements and operational preferences. Do you want self-managed, or do you want fully managed services? These are good questions to ask yourself when you're choosing the
right container to use. Although this is not a session to deep dive into the container options, we have tons of other videos out there that you can watch if you want to dive into the various container options. But here, at a very high level, as a refresher: we have fully managed container services, Amazon EKS and Amazon ECS, both of which provide a range of compute options and deep integrations with other AWS services. They provide the global scale and reliability you have come to expect. We also have two host options: the EC2 type, where you manage the underlying instances on which your containers run, or you can choose to run your containers in a serverless manner with AWS Fargate. And last but not least, we also have the container registry, Amazon ECR, a fully managed container registry offering high-performance hosting, where you can reliably deploy application images and artifacts anywhere. So that's a quick, high-level overview of what we offer in containers. Now, let's take a look at what
we offer in observability. Amazon CloudWatch is our AWS monitoring solution that integrates with over 120 AWS services and really provides you with that single source of truth for your observability, with integrations to all these other AWS services readily available for you. Amazon CloudWatch collects monitoring and operational data in the form of logs, metrics, and traces, and provides you a unified view of operational health and complete visibility across multiple AWS resources and applications running on AWS or on-premises. Some of the feedback that
we hear from customers about why they love using Amazon CloudWatch? Simplicity. Because they're already building workloads and applications on AWS, it's really out-of-the-box observability, with a lot of preset dashboards and capabilities already available. So with minimal effort, customers can get their monitoring and observability up and running on their AWS workloads and applications. Centralized observability. They can now go to one single place to see their operational health and application performance, all in one single place. And also end-to-end visibility. Because CloudWatch is highly integrated with 120 AWS services, we can provide correlations between services that give you that end-to-end visibility into the environment. We also understand that there's
a lot of folks out there are using open source. So we also have open-source options as AWS managed offerings. If folks are using Prometheus or Grafana out there, these certainly represent a large community of our customers, and we have those options for customers as well. One of the frequent questions we often get is: I'm already using Prometheus and Grafana, why do I need AWS? Here are some of the reasons why AWS managed services provide additional capabilities and features for our customers. First and foremost, security. We offer secure integration with AWS Identity and Access Management (IAM) for authentication and authorization, ensuring all your data is encrypted in transit and at rest. Security is certainly a very big reason why customers look at AWS managed versus self-managed open source. AWS managed Prometheus and Grafana are also able to support production workloads: because we manage the underlying infrastructure, we help you eliminate over-provisioning or under-provisioning, and we can optimize as your production workload grows or scales. We also provide high availability across multiple Availability Zones for our managed services. Then, the integrations. We have several integrations with AWS services, including builder tools such as AWS CloudFormation and the CDK, and AWS CloudTrail for auditing. And last but not least
is our contribution back to the open-source communities, where we collaborate with special interest groups to represent the voice of our customers. We are always in these open-source communities to understand what our customers are looking for, making sure we have the capabilities that support customer needs. So that is what we have in AWS observability and monitoring. Now, I'm going to transition to Arun to talk specifically about container environments and some of the latest features and capabilities to help you gain more visibility into your container environment. All right, Arun, take it away. - Thank you, Lucy, for
such a great introduction. You're always amazing. Now, let's quickly go
through some timelines. In 2019, AWS introduced Container Insights, a fully managed native observability service for containers. As you already know, it's reliable, it's secure, and it has built-in analytical capabilities. In 2020, we introduced Amazon Managed Service for Prometheus. It's a serverless, Prometheus-compatible monitoring service to securely monitor your container environment at scale. Again, it's fully managed, it's secure, it's highly available. What else do you need? And then at re:Invent 2023, two brand-new features: first, Enhanced Container Insights. It provides additional telemetry from Kubernetes control plane components. It also gives you kube-state-metrics, a lot more real-time state, and a lot more real-time troubleshooting capabilities. And then we also introduced
CloudWatch Application Signals, and that is a game changer. In one sentence: CloudWatch Application Signals automatically collects the golden metrics. Again, what are those golden metrics? Availability, volume of requests, latency, and faults and errors: all four golden metrics from your applications, enabling you to quickly see operational health without writing any custom code. And guess what? You don't even have to create dashboards for that. Now let's dive deep into some of the aspects of Amazon CloudWatch Application Signals. First and foremost, it gives you prebuilt dashboards. These dashboards are optimized to present the most important operational data all in one place, and all of this without any manual effort. Then come the golden metrics, all four of them: availability, volume of requests, latency, and faults and errors. All of them are automatically discovered out of the box for every service, for every operation, and for every dependency
of your application, and that is really cool. And then come service level objectives, also known as SLOs. Today, businesses gauge the happiness of their customers not by measuring the CPU or the memory of their infrastructure, but by measuring the service level objectives that are promised to customers. And with Application Signals, you now have a native way to monitor and report on SLOs, all in CloudWatch.
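As an aside, the arithmetic behind an availability SLO is simple to sketch. The snippet below is a minimal illustration of attainment and error budget, not how Application Signals computes SLOs internally; the 99.9% objective and the request counts are just example values.

```python
# Minimal sketch of availability-SLO arithmetic (illustrative only; this is
# not how Application Signals computes SLOs internally).
def slo_status(total_requests, failed_requests, objective=0.999):
    """Return SLO attainment and the fraction of error budget remaining."""
    if total_requests == 0:
        return {"attainment": 1.0, "budget_remaining": 1.0}
    attainment = (total_requests - failed_requests) / total_requests
    allowed_failures = total_requests * (1 - objective)  # the error budget
    budget_remaining = (
        1 - failed_requests / allowed_failures if allowed_failures else 0.0
    )
    return {"attainment": attainment, "budget_remaining": budget_remaining}

status = slo_status(total_requests=100_000, failed_requests=50)
# 50 failures against a budget of ~100 allowed failures leaves roughly
# half the error budget for the rest of the period.
```

The point of the error budget is exactly the wake-up-call trade-off from earlier: while budget remains, you can ship; once it burns down, reliability work takes priority.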
And lastly, but most importantly, comes the ability to troubleshoot faster. Because all of these things come together, Amazon CloudWatch Application Signals gives you the ability to arrive at the root cause of an incident pretty fast, in just a few clicks. Now, let's dive deep into a day in the life of Bob. Bob is a CSEC professional. He's an application owner, and very thorough in
everything that he does. As you can rightly see, he owns this retail store where you can buy all of these luxury watches. He also owns this pet clinic, which is a high-volume traffic application. It's a Friday afternoon, and Bob is in a hurry to get out of the office because his friends are waiting for him to join them at Top Golf for an evening full of drinks, some fun, some golf, and a lot of food. So Bob wrapped up at his office, logged off, and was on his way to Top Golf. At that very moment, a new DevOps team member messaged him on Slack. He said that he was finding it difficult to monitor all the clusters he manages in one single place, and someone told him that Bob was the right person to help. Bob was in a hurry, but Bob being Bob, he wanted to help. What did Bob do? Bob logged in to the AWS Management Console and went to CloudWatch. There in CloudWatch, he clicked on Container Insights and opened up the Container Insights landing page. Here Bob explained to this new team member the features that Container Insights brings to the table. First and foremost, all
the clusters that you have will be listed on the landing page. You will see the health of the clusters based on their color. Red means there is some alarm that has triggered. Green means everything is good, hunky-dory. Now this person asked, "Bob, I understand the red and the green. What's the deal with the dark blue and the light blue?" Bob explained: with dark blue, the clusters are operating at high utilization and you might want to monitor them closely. With light blue, the clusters are operating at normal utilization. Then this person asked, "Hey, Bob, tell me, this one is red, probably something bad, but why blue? Why not red?" And Bob explained, "Well, AWS is doing something really cool here. AWS best practice alarms, using threshold information collected from hundreds of thousands of clusters, give a high-level overview of what's happening in your cluster even when you have not configured any manual alarms." "Oh wow, that is cool. Can you click on that blue one, Bob?" Bob clicked on the blue one, and immediately the person was able to understand that the blue cluster might need a closer look, even though it has no manual alarms, because it has 90% memory utilization. In the case of the lighter blue, there is probably no problem at all, based on the AWS best practice alarms. Bob also explained that you can get the control plane summary on the landing page; scroll down, and you will be able to see the top nodes by CPU utilization and memory utilization, and scroll down further to see all the clusters and their health. Then this person asked, "Bob, this is red. Why is it red?" Then Bob explained, "Well, Container Insights provides a rich set of metrics, over 50 of them, to help you understand your Amazon EKS health and performance. And guess what? You can convert any of them into an alarm." Now let's click on this one, and as you can see, there are 18 alarms configured, of which two are triggered; that's why it says 2 out of 18. Now if you click on the cluster, it takes you to the two alarms that are actually triggered. As you can rightly see, there is an alarm which is CPU
utilization over pod limit. So this person asked, "What is this alarm? What is the CPU utilization
over pod limit?" Bob answered, "Well, it
is the pod CPU usage total divided by the pod CPU limit, and every time when the p
od CPU limit is coming closer to the specified limit, this alarm will trigger." Then Bob also explained, there are over 50 metrics
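The ratio Bob describes can be sketched in a few lines. This is an illustrative check only; the 90% threshold is a hypothetical example value, not a CloudWatch default.

```python
# Illustrative sketch of the "CPU utilization over pod limit" metric described
# above: total pod CPU usage divided by the pod CPU limit. The 0.9 threshold
# is a hypothetical example value, not a CloudWatch default.
def cpu_over_pod_limit(pod_cpu_usage_total, pod_cpu_limit, threshold=0.9):
    """Return the usage/limit ratio and whether an alarm should fire."""
    ratio = pod_cpu_usage_total / pod_cpu_limit
    return ratio, ratio >= threshold

ratio, alarm = cpu_over_pod_limit(pod_cpu_usage_total=950, pod_cpu_limit=1000)
# ratio 0.95 is above the 0.9 threshold, so the alarm fires
```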
Bob also explained that there are over 50 metrics and corresponding widgets on the page itself so that you can dive deep into the health of your clusters. Bob also said, "Well, if you're using distributed GPUs, then Container Insights provides you enhanced GPU metrics." Now, let me take you to
the enhanced GPU metrics; I have them set up in a different region. And what does that mean? Container Insights is available in several regions. Now let me take you to a different region to dive deep into the enhanced GPU metrics as well. As you can rightly see, I'm in a different region, and there the clusters are all okay, 16 alarms green. If you scroll down, you will be able to see pod GPU utilization, memory utilization, memory used by total, power draw, and even the pod GPU temperature. Isn't that amazing? This new DevOps team member was very happy with what Bob was able to walk him through. Bob was happy that he was able to help. He signed off from his laptop and was on his way to Top Golf, because he wanted to join his friends for an evening full of fun, some drinks, and some golf. At that exact moment, Bob received a message from his DevOps team stating that multiple customers were reporting latency in the pet clinic application, an application that he owns. Bob immediately knew that
this was not a good sign. Why so? Because it affects the revenue of his company: people need to register with the pet clinic, and that is the revenue stream for the company. What did Bob do? Bob again went back to Container Insights, and this time he wanted to identify why his web service application was slow. He clicked on the red one, went to the clusters, scrolled all the way down to the bottom, and then clicked on Application Signals. There in Application Signals, he was able to see all the services. Again, these services are automatically discovered by Application Signals for you. And in Application Signals, he was able to see immediately that something was wrong: an SLI was unhealthy, there was a latency issue. What did Bob do? Bob clicked on that service
and opened up a tab. As you can rightly see, it took him from Container Insights to Application Signals, and here you get all four golden metrics: the volume of total requests, the availability, the latency, and the faults and errors, all of them aut
omatically discovered and dashboarded for you. So Bob went through the volume of requests and availability: the availability is a hundred percent, and the volume of requests is pretty steady, not a big deal. He looked at faults and errors; nothing wrong there. And then he could see that the latency, especially the p99, and of course the p90 and p50, were all on the higher side.
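What p50, p90, and p99 actually mean can be made concrete with a tiny calculation. The following is a nearest-rank sketch over made-up latencies; CloudWatch's percentile computation may use different interpolation.

```python
import math

# Nearest-rank percentile sketch: the pXX latency is the value below which
# roughly XX% of request latencies fall. The sample latencies are made up.
def percentile(values, p):
    """Return the nearest-rank p-th percentile (p in 0..100) of values."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 125, 128, 130, 132, 135, 138, 140, 900, 950]
p50 = percentile(latencies_ms, 50)  # typical request
p99 = percentile(latencies_ms, 99)  # tail latency, dominated by slow outliers
```

A handful of slow requests barely moves the p50 but blows up the p99, which is exactly the pattern Bob is seeing.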
And what did Bob do? Bob clicked on a point in time for the p90 latency. And as you can see, Application Signals narrowed down and provided the traces for that specific point in time, and there were a lot of trace IDs captured. What did Bob do? Bob clicked on one of the traces. Here, you can see the requests coming in from the client all the way back to some S3 and some DB, but there is some red happening. What did Bob do? Bob clicked on the red service
that was having latency, scrolled down, and could immediately see that the front end was taking about 20 seconds, but the backend, the web service, was taking pretty much the entire time. What did Bob do? Bob told the DevOps team member: the GET request made to this IP address at port 8083, you've got to check it. Why is it making that call? Why is it slow? You might want to go and check the IP address and the port and then dive a little deeper. Bob also mentioned: check this out, there is an S3 call being made. It's a 200, okay, but it is doing a ListBuckets, which is not a good operation. You should go to the respective S3 bucket and the respective S3 object; if you have hundreds of thousands of buckets, a ListBuckets call may take forever to return a response. So you might also want to change the code in such a way that it goes to that particular object. Bob was happy that he was able to show how Application Signals gives you an easy way to troubleshoot a problem. But Bob wanted to show off a little bit more. Bob went to Application Signals and clicked on the Service Map, and then Bob explained
to him that this is the way to get unified, single-pane-of-glass visibility for your application, because Application Signals discovers this service map for you and then constantly updates it as your application changes. This was big news for the DevOps team member; he was happy, and Bob was also happy that he was able to help. Bob was about to sign off from his laptop so that he could
head to Top Golf to join his friends for an evening full of fun,
some drinks, and some golf. At that exact moment, an application developer reached out to Bob. She's new to the team. She had deployed some applications and wanted to troubleshoot, because she assumed there was a performance issue and she didn't know what to do. Someone pointed out to her that Bob is the right person to help. She called Bob and said, "Hey Bob, I want to check some performance logs for the pet clinic application. I don't know which log to search, and I don't know how to do it. Please help me." Now, what did Bob do? Bob being Bob, he really wanted to help. Bob immediately went to Container Insights, and this time he clicked on the red one, which is bad, and went straight to the container. As you can see, now the dashboard is focused on containers: the container summary, the
memory utilization, the pods, whether any are in a waiting state, and so on. The request from the application developer was: I want to troubleshoot the performance logs for the cluster. What did Bob do? Bob scrolled all the way down, selected that particular container, and under Actions, he clicked on View performance logs. Container Insights will now take you all the way to Amazon CloudWatch Logs Insights. As you can see, on the landing page, the particular log group is preselected and there is a query preselected for you. All you have to do is click Run Query, and it's going to fetch the data for the timeframe, which was also preselected for you. Bob knew that at re:Invent 2023, the Amazon CloudWatch team released a lot of new features, specifically around the
generative AI capabilities of CloudWatch, and Bob simply wanted to show a couple of tricks to this developer. What did Bob do? Bob cleared the query, and in the query generator he typed something. And then what did he do? He clicked on generate new query. As you can see, a prompt, a natural language prompt, created a query. All you have to do now is click Run Query, and it is going to run a query that gets the cluster name, the Kubernetes host, and all the instance IDs; as you can rightly see, it is getting populated. And you could also do some really complex query generation using this feature. What did Bob do? Bob cleared the existing query and typed something, and then again clicked
on generate new query. This time it generated a much more complex query. All you have to do is click Run Query, and it is going to come up with the data we requested; in this case, all the nodes arranged by average CPU in descending order. All you have to do is prompt it with natural language, and it supports everything from very simple queries to complex queries, as you can rightly see. You could also export the results as Excel, CSV, or JSON, so that you can store them and use them for other functions, and you could do all of this from an API perspective as well.
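To sketch that API angle: the hypothetical helper below composes parameters for the CloudWatch Logs StartQuery API (boto3's `logs.start_query`) with the same "nodes by average CPU" idea. The log-group name follows Container Insights' `/aws/containerinsights/<cluster>/performance` convention, but the field names (`NodeName`, `node_cpu_utilization`) are assumptions you should adjust to your own log events.

```python
import time

# Sketch: compose StartQuery parameters for the "nodes by average CPU,
# descending" query from the demo. Field names (NodeName,
# node_cpu_utilization) are assumptions; adjust them to your log events.
def build_start_query_params(cluster_name, hours=1):
    query = (
        "stats avg(node_cpu_utilization) as avg_cpu by NodeName "
        "| sort avg_cpu desc"
    )
    now = int(time.time())
    return {
        "logGroupName": f"/aws/containerinsights/{cluster_name}/performance",
        "startTime": now - hours * 3600,
        "endTime": now,
        "queryString": query,
    }

params = build_start_query_params("demo-cluster")
# These would be passed to a boto3 CloudWatch Logs client:
# logs.start_query(**params), then logs.get_query_results(queryId=...)
```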
Bob was able to help, and Bob was happy that he was able to help, but he still wanted to show some of the more advanced features that Amazon CloudWatch delivers today. Bob went to the log groups and showed this person that if you choose to create a new log group, Amazon CloudWatch today gives you the ability to have a Standard log class as well as an Infrequent Access log class, which benefits you from a cost perspective, essentially enabling you to save some cost. Obviously there might be some feature trade-offs, but it is up to you whether to save cost using Infrequent Access or to use Standard. Bob also showed that you could
actually have log anomalies all detected by Amazon
CloudWatch automatically for you, essentially finding anomalous behavior in the ingested log events of a log group, automatically. Bob also showed that there is a feature called Live Tail. If you are familiar with the Unix command tail -f, Amazon CloudWatch Live Tail is exactly that: it tails a log group as live as can be. All you have to do is select a particular log group and click Start, and it's going to tail that log. You could also add filter patterns; for example, you can search for Arun in the log group, or search for Lucy, and if a match happens, it's going to show it to you. As soon as you click Start, it essentially does a tail -f on that particular log group, and then you can dive deep into all the different aspects of the log.
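The tail -f analogy can be sketched in a few lines of Python. This is a bounded stand-in using an in-memory stream, just to illustrate the follow-and-filter idea; Live Tail itself operates on log groups, not local streams, and the sample log lines are made up.

```python
import io

# Bounded stand-in for the tail -f / Live Tail idea: read lines from a
# stream and keep only those matching an optional filter pattern.
def follow(stream, pattern=None):
    """Yield lines from stream, filtered by substring pattern if given."""
    for line in stream:
        line = line.rstrip("\n")
        if pattern is None or pattern in line:
            yield line

log = io.StringIO("INFO Arun deployed v2\nDEBUG cache miss\nINFO Lucy signed in\n")
matches = list(follow(log, pattern="Arun"))  # only events mentioning Arun
```

Live Tail's filter patterns play the same role as `pattern` here, except applied to a continuous stream of new log events.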
Bob was happy that he was not only able to help, but also to show a lot of additional features that CloudWatch brings to the table. Bob was about to sign off from his laptop so that he could head to Top Golf to join his friends for an evening full of fun, some drinks, and some golf. As he was getting ready to leave, Bob's manager came to his desk. Looking at his manager's grumpy face, Bob could sense that something was not right. His manager straightaway told
Bob, "Bob, we are in trouble. It's time to renew the license of that third-party monitoring tool we got two years ago. They have a new licensing model, and it's going to cost us an arm and a leg. And also, they said they could give us a five percent discount if we sign up for a three-year extended contract. Bob, we really don't want to spend any more money on this, and obviously we don't want to do this three-year contract. Do you know if AWS has an option for a centralized monitoring solution, one that would allow us to monitor our containers and probably much more?" Bob had a broad smile on his face. Bob immediately said, "Yes, there is an AWS native cross-account observability solution. Let me pull up my presentation
and walk you through that." And Bob switched to presentation mode. Bob explained to his manager: with CloudWatch cross-account observability, you can monitor and troubleshoot applications that span multiple accounts within a region. And again, what does that mean? Bob explained: consider that you have multiple accounts, say account A, B, C, all the way up to N. What you can do is bring your logs, metrics, traces, container observability, application insights, and a lot more into one single observability account, essentially giving you a bird's-eye view across multiple accounts and a reduced mean time to resolution. And Bob also said that this is so easy to set up; in a few clicks, you will be able to set it up. You could also do this with infrastructure as code, so as new accounts are created through your account vending mechanism, you could make that new AWS account, let's call it N+1, send its logs, metrics, traces, Container Insights, Application Insights, and much more into that central observability account. Bob's manager asked this
very vital question. "Bob, it seems like this is what we want. Finally, we can get away from that third-party tool. But give me the truth: how much does this cost?" Bob smiled and said, "Well, this new feature comes at no additional cost for customers." Cross-account observability with CloudWatch comes at no additional cost as of today. And again, this is within a region today. It is not cross-region, it's within a region, but one day it could be across multiple regions, because AWS always listens to its customers. For now, it's cross-account observability at no additional cost, giving you seamless cross-account visibility into your logs, metrics, traces, Container Insights, and much more. Bob's manager asked, "This is cool. Can you show me the experience
from a console perspective?" Bob said, "Why not? Let me show you." This time Bob pulled up his demo setup and went to CloudWatch. There in CloudWatch, he clicked on Container Insights. There in Container Insights, Bob showed that there are three clusters, but as you can rightly see, this one is called the monitoring account, all the way at the top right corner. And if you scroll down, you can see that there are clusters with the same name, but this one is getting reported from an account called microservices A, another from microservices B, and another from microservices A. And you can actually dive deep into each of these clusters and troubleshoot, all from the single monitoring account. You also have the option to troubleshoot a lot more. For example, if you go to log groups, you can see that this log group is reported from the account labeled monitoring account, which is this account. But if you scroll down, you'll see that another log group is reported from microservices A, which is a totally different account. And again, scroll down, and you'll see another one from another account, and so on. You could use Logs Insights, and so on and so forth. Several other CloudWatch features benefit as well, because this facility of cross-account observability today brings you seamless data access and navigation across logs, metrics, traces, and much more. Bob's manager was super happy with what Bob was able to tell him, and at the end of the day, Bob's manager was able to get rid of that third-party licensing contract and move to this
cross-account observability from Amazon CloudWatch. Bob had an awesome smile on his face because he was able to do a lot that day, and Bob was happy because he was getting ready to join his friends at Top Golf. On his way to his friends, you could see that Bob accomplished a lot that day. He was able to explain the Container Insights experience to a new team member. He was able to troubleshoot a real-world problem, a latency problem, in a few clicks. Typically in enterprises, large or small, it doesn't matter, it takes hours and a dozen people to troubleshoot a latency issue; here, Bob was able to do all of that by himself. He was able to showcase the latest generative AI capabilities of CloudWatch to a new team member, and Bob was also able to explain to his manager the cloud-native cross-account observability, enabling him to save a ton of money that he was spending on some third-party contracts. Now, would you like to help yourself and your customers the same way? That's the biggest question. And believe me, we have got you covered. As you can see, these are some of the resources
that we want to offer you. The first one is the Observability Workshop. As you can see, it's a hands-on workshop. It's free of cost for you if AWS hosts the workshop; please contact your account team and they should be able to set that up for you. And what does that workshop look like? It looks like this. It is, again, a free-of-cost workshop for you if AWS hosts it for you. Here you will have native observability across logs, metrics, and so on, and under Container Insights you'll be able to get hands-on with everything we talked about today. Under Managed Open Source Observability, as Lucy was talking about, there are the managed open-source Prometheus and Grafana services, and of course AWS Distro for OpenTelemetry, which is a collector that can send data to multiple monitoring tools; all of it is there for you. And if you go to use cases, there is a particular use case for Application Signals. Again, you will be able to get hands-on with all of this. We are contributors to this, and we continuously build on it. Now, the next one is the
observability best practices. This is prescriptive guidance and recommendations with implementation examples. And how does it look? It looks like this. This is also built by contributors within AWS. It is publicly accessible, it has a bunch of guides, and it is cloud-agnostic, so you will be able to get hybrid and multi-cloud guidance and a lot of information about how to build amazing observability. We also have the observability maturity model here, so that you can dive deep and figure out where you stand in the observability maturity model. You know, where do you stand? Stage one, two, three, or four? Now, if you are a Terraform shop or a CDK shop, go and check this out. This is the Observability Accelerator for Terraform and CDK, essentially a bunch of modules to help you configure observability. And we also have Skill Builder here for you. Essentially, you can enroll in this digital training course, which contains presentations, architecture diagrams, service demonstrations, and a lot of other resource links so that you can expand your knowledge base, and there is also an EKS workshop for you. Now, as we wrap up, you know that observability
is the foundational element for establishing a reliable service, but it does not happen overnight. Patience is a virtue; there is no secret sauce here, and there is no compression algorithm for experience. What matters is you and your experience. If you would like to connect with us, our LinkedIn handles are here. Please reach out if you have any questions, and that's a wrap from my end. Back to you, Lucy. - Thank you, Arun, for a great demo. Bob certainly knows a lot of CloudWatch capabilities, and he was able to show that and impress his coworkers and his manager. I think Bob deserves a promotion. And thanks again, everybody, for your time joining us and learning about the latest capabilities. We hope this is valuable for you and that you can now take it to your day-to-day life to improve your work, troubleshoot, and find the root cause of issues. Thanks, everybody. Have a great day.