
Hands On With AWS Kinesis

Hey, Austin Poole here. Today we're going to cover Kinesis in a nutshell. This is an introductory video on the AWS service for streaming data.

GitHub: https://github.com/bpoole6/starting-kinesis

Contents of this video
00:00 What is Kinesis
01:15 Creating a Kinesis data stream using the Console
02:00 Explaining the Kinesis Console
03:07 Downloading the project from GitHub
03:20 Kinesis Producer Example
04:12 Kinesis Data Viewer in the Console
04:30 Kinesis Consumer Example
05:39 Fin

Austin Poole

6 days ago

What is Kinesis? Kinesis is an AWS service for ingesting and storing streaming data. This includes data such as log data from mobile devices, website click streams, stock prices, temperature readings, and whatever you fancy.

Let's actually create our first data stream. First we'll pick a name; let's go with first_stream. Now we need to decide if we are going to go with on-demand or provisioned. With on-demand, AWS will scale the throughput for us depending on traffic. With provisioned capacity, we must determine the throughput ourselves. We'll go with provisioned for the capacity mode.

Kinesis's smallest unit of throughput is measured in what AWS calls shards, and shards contain records (more on records later). One shard has a write throughput of 1 MB per second or 1,000 records per second, and a read throughput of 2 MB per second.

Let's review what this table means:
- Capacity mode: as you saw earlier, we are using provisioned capacity, where we define the throughput.
- Provisioned shards: the number of shards we requested.
- Data retention period: the TL;DR is that our data will by default be stored in the shard for a period of 24 hours for free. Anything past 24 hours will incur a charge.
- Server-side encryption: using KMS, we can encrypt our data at rest.
- Monitoring, enhanced metrics: this collects additional telemetry data.
- Tags: tags on the data stream.
- Data stream sharing policy: allows you to share streams across AWS accounts.
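If you'd rather create the stream from code than the console, here is a minimal sketch using the AWS SDK for Java 2.x; the region and stream name are assumptions matching this video, so adjust them to your setup.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.CreateStreamRequest;

public class CreateStream {
    public static void main(String[] args) {
        // Assumption: region us-east-1 and stream name "first_stream"
        try (KinesisClient client = KinesisClient.builder().region(Region.US_EAST_1).build()) {
            client.createStream(CreateStreamRequest.builder()
                    .streamName("first_stream")
                    // Provisioned capacity: one shard = 1 MB/s (or 1,000 records/s) write, 2 MB/s read
                    .shardCount(1)
                    .build());
        }
    }
}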
Let's explore the UI before we take a look at a code example. Applications gives you various options for using producers and consumers. Monitoring shows you what is going on with your stream; this is a great screen to check if you're running into hard limits. Configurations allows you to change settings such as the capacity mode, provisioned shard count, tags, encryption, data retention period, advanced monitoring, and more. Enhanced fan-out is for when you want to fan out your data; hopefully this will be covered in a future video. Data viewer allows you to read data from your shards via the UI; as you can see, we currently have no data in our shard. Data stream sharing is for sharing your data stream with different AWS accounts. EventBridge Pipes is used to create custom actions when certain events occur with your data stream, such as a change to the configuration of your data stream.

Now let's look at a code example. Navigate to this GitHub link, or look for the link in the description. Use git clone or download the project, then open it in your favorite IDE.
Navigate to Constant.java and make sure the stream name and region are correct. Let's go to Producer.java and produce some records. In the sendEvents method we first create the Kinesis client, then we create a list for put records. We are creating several records, so we'll need a for loop. We generate some data and convert it to SDK bytes, and now we create the actual put record entry. Records are written to shards, and records consist of three elements: a partition key, a data blob, and a sequence number. We only need to provide the partition key and the data blob; AWS will handle creating the sequence number. The next couple of lines push the records to the shard. Now let's actually run the program. Success!
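Here is a minimal sketch of what a sendEvents-style producer can look like with the AWS SDK for Java 2.x. It is not the repo's exact code; the region, stream name, and data values are assumptions for illustration.

import java.util.ArrayList;
import java.util.List;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry;

public class Producer {
    public static void main(String[] args) {
        try (KinesisClient client = KinesisClient.builder().region(Region.US_EAST_1).build()) {
            List<PutRecordsRequestEntry> entries = new ArrayList<>();
            for (int i = 0; i < 5; i++) {
                // We supply only the partition key and data blob;
                // Kinesis assigns the sequence number when the record is written
                entries.add(PutRecordsRequestEntry.builder()
                        .partitionKey("key-" + i)
                        .data(SdkBytes.fromUtf8String("event-" + i))
                        .build());
            }
            // Push the batch of records to the stream
            client.putRecords(PutRecordsRequest.builder()
                    .streamName("first_stream") // assumption: matches the stream created earlier
                    .records(entries)
                    .build());
        }
    }
}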
Let's go back to the UI under the Data Viewer tab and pull the data. You can see we have data in our shard. This data will remain here for a period of 24 hours because we left the data retention period at the default.

Now we will use Consumer.java to read from the shard programmatically. In the receiveEvents method we first describe the stream to get the shard ID. Next we need to get the shard iterator to iterate over the shard. The shard iterator has five different types: AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, TRIM_HORIZON, AT_TIMESTAMP, and LATEST. We will use TRIM_HORIZON because it reads everything on the shard. We get the shard iterator from the shard iterator response, prepare the get records request with the shard iterator, and call the API. Now we loop over every record in the response. Sometimes responses do not have any records in them, so we then get the next shard iterator to fetch more records if they exist.
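Here is a minimal receiveEvents-style consumer sketch with the AWS SDK for Java 2.x, following the same steps just described. It is not the repo's exact code; it assumes a single-shard stream named "first_stream" in us-east-1.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamRequest;
import software.amazon.awssdk.services.kinesis.model.GetRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.GetRecordsResponse;
import software.amazon.awssdk.services.kinesis.model.GetShardIteratorRequest;
import software.amazon.awssdk.services.kinesis.model.Record;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;

public class Consumer {
    public static void main(String[] args) throws InterruptedException {
        try (KinesisClient client = KinesisClient.builder().region(Region.US_EAST_1).build()) {
            // Describe the stream to find the shard ID (assumes a single shard)
            String shardId = client.describeStream(DescribeStreamRequest.builder()
                            .streamName("first_stream").build())
                    .streamDescription().shards().get(0).shardId();

            // TRIM_HORIZON starts from the oldest record still retained in the shard
            String iterator = client.getShardIterator(GetShardIteratorRequest.builder()
                            .streamName("first_stream")
                            .shardId(shardId)
                            .shardIteratorType(ShardIteratorType.TRIM_HORIZON)
                            .build())
                    .shardIterator();

            while (iterator != null) {
                GetRecordsResponse resp = client.getRecords(
                        GetRecordsRequest.builder().shardIterator(iterator).build());
                // A response can legitimately contain zero records; keep polling
                for (Record r : resp.records()) {
                    System.out.println(r.sequenceNumber() + ": " + r.data().asUtf8String());
                }
                // Use the next shard iterator to continue reading new records
                iterator = resp.nextShardIterator();
                Thread.sleep(1000); // avoid hammering the shard's read throughput
            }
        }
    }
}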
Now run the program. There are our records! We can leave the program running and generate more records, and you will see them get picked up by the consumer. Well, that's Kinesis in a nutshell. Subscribe if you want to learn more about AWS services like Kinesis.
