[MUSIC PLAYING] Hello, everybody and
welcome to CS 301 video two. The topic of the day
is going to be privacy. We will be talking
about a couple of things that all are in general
relate to privacy. So I went ahead and
wrote ahead of time the list of topics that
we are going to go over. To a certain extent, some
more in depth than other ones. So the first thing we're
going to talk about is what privacy risks
are and the principles. And then we're going to take
a look at the Constitution's Bill of
Rights and look
at the Fourth Amendment. And the expectation
of privacy that we have based on things like
the Fourth Amendment. And also to look at
surveillance technologies and how those potentially run
into some issues with what the Fourth Amendment says
that we are guaranteed. From there we're going
to look at the business and social sectors, and how
privacy is involved in that. And also government systems. And then we're going to
look a little bit at how we can protect privacy, what kind of technologies are out there, the market that has opened up, and some of the rights and laws regarding that. And finally, we will
end it with a little bit about communication,
which will make more sense once we get there. So without further ado,
let us begin today. And remember that I will try
to be as neutral as possible and to make things as clear as possible. This topic is not as iffy as some-- I mean, everything is iffy, but this one is a little bit nicer, so things should be a little bit smoother. So let us begin by analyzing
privacy risks and principles. Let's talk about some of
the key aspects of privacy. And I'm going to go ahead and
write them down here. These are the key
aspects of privacy. And it's kind of like what
you expect when you're thinking about what privacy is. At least in the
technological world but also in general, in life. It can be kind of split
into three main categories. And those are freedom
from intrusion. So basically the ability
to be left alone. If you want to be-- if you want to have privacy, you want to be at peace, to have that ability. Not have somebody looking
over your shoulder every single second. Also control, and
this is where it starts getting more digital,
control of information about oneself. So basically if there is
information related to you, you want to have full and total
control about that information. What is there and
what is not there. And finally, freedom
from surveillance. And this doesn't
necessarily have to be technology related
but there are some big technology-related aspects to it. Because, I mean,
surveillance could be basically from being
tracked, followed, or watched. And you could be
tracked and followed even before computers
were invented. You could have like
somebody following your car, tailing you like in the movies. But nowadays, of
course, with technology, there's other ways that they
can track you digitally. Those are basically
the main aspects of what we expect when
I say I want my privacy. If they ask
you what
do you mean by that, then you're probably
talking about one of these. You want to be left
alone, you want to control information
related to you, and you want freedom
from surveillance. Which, again, is kind
of being left alone in the same sense of basically
you don't want to be followed, tracked, or basically watched. What are some of the privacy
threats that are out there? Like how do our three
key aspects of privacy, how can they be threatened? How can we, for example,
have that freedom from intrusion broken down, among other things? And so it turns out
that there's a couple of different categories. First of all, I guess the most
obvious is our own actions, right? So our privacy can be destroyed,
or broken, or violated by our own single actions. If I go out in the
stream live right now and I give you my
social security number just for the heck of it. And say, hey, try to hack me. Well you kind of just gave away
some really private information that you really
shouldn't give away.
But some people do that. There's, I think, one of
the antivirus creators has this information posted. And basically says,
look, it's public. That's how much I trust it with my life. Like-- what do they call it? The protection-- identity theft protection. That's how much they trust their identity theft protection, that they actually went ahead and released their information. And that's just one way of
selling their product, right? But anyways, those are
potentially privacy risks regardless of what they say. I mean, no matter how
good their software is, it would be better if they hadn't said anything, right? But, of course, they've got
to do the pros and cons. To show trust in their
software, they're releasing that information. And then, you don't
need to do that, but if you were to do
it, technically speaking, you'd be as OK as
he is, in theory. So anyway, our own actions. We can basically create
our own privacy risks. OK now that's not
always the case. It's not always our fault, but that's not to say that there isn't some fault in our actions. So if you're not careful
with your information then part of that is your own fault. However not entirely because
there's another side of it. There's also theft
of information. So like I was saying
with identity theft, if somebody comes in
and steals information, whether that's
physically or digitally-- So suppose that they're
breaking into an office and steal some
secret information about a company or
about an individual. Somebody breaks into
your house and steals your passport or
your driver's license, that's theft of information. And that, of course, is
not your fault at all. I mean sure you
could be like, well I should have kept
my house safer. But then again that doesn't
make it your fault, right? You're still the victim there. So essentially
theft of information is another big risk of
losing your privacy. And there's also the sort
of accidental leaking of information. I would call it the
inadvertent leakage-- inadvertent leakage
of information. That's kind of like a
mixture of your own actions, but unintentional. So whereas you
could intentionally release information like
your social security number to prove something that
you're not afraid or something. Like I was watching Iron
Man 3 the other day and, in the movie, Iron Man
gives out his address to the villain, the Mandarin. And he's like, this
is where I live. Come get me. And then 10 minutes
later in the movie, a chopper shoots a rocket and
shoots his house and blows it up. And then eventually,
later in the movie, he's like, I'm sorry. Maybe I shouldn't have
done that kind of thing. That is sort of like
your own actions. On the other hand, inadvertent
leakage of information is when you really didn't
mean to release information but you may have
accidentally done so. For example, if you went in
and looked at all my lectures on YouTube, I'm
sure that you could put two and two
together and figure out some details about me. And that's not because
I intentionally did it to release
that information, but it's just difficult. We as human beings,
we're social beings that like to interact and talk. And as part of that
talking process, sometimes we say things that
we really shouldn't say. And not necessarily
like we're trying to-- I'm sitting over here saying
something offensive or not, I'm simply saying
there's information that we really shouldn't say
but we might accidentally give away. And maybe because we
don't think at the moment that that piece of
information is critical. Or maybe it's not, but compounded with other pieces, that's where you can kind of start some
sort of information leakage. On the other hand, there's also
threats whenever somebody else has your information. If an institution uses
your personal information. So, for example,
the most classic. A website that tracks you
and has your information. If they're using that
information that isn't-- it could be intentional, it
could also be unauthorized. So it could be intentional usage
of that, such as when you agree. Like when you create a Facebook account, you're giving away to Facebook
some personal information. That's intentional. There's also unauthorized. And that's the more
shady kind of thing. Or, potentially, even within an institution where you have legitimate use of personal information, there could also be leaks by insiders. So if you've heard of the website
Wikileaks or just in general like-- I think there's a movie
called Inside Man. I think it's called Inside Man. Wait. No, there is a movie
called Inside Man, but that is not the movie I'm thinking about. That's about a bank robbery. There's a movie with,
I think, Russell Crowe that talks about a big leak. I forgot the name of it. Maybe I can just go to Russel
Crowe's biography real fast. [TYPING] Russell
Crowe filmography. The Insider. It might be The
Insider actually. Inside Man, that's-- I love that movie, by the way. Inside Man is like one
of my favorite movies but that is not anything
related to this. But it's a good movie,
I recommend it too. But anywa
ys--
[TYPING] The Insider. It's called The
Insider from 1999. I believe this is the one. It's written by Michael Mann-- Directed by Michael
Mann, I'm sorry. Well, actually, kind of
partially written by Michael Mann too. But it's about a whistleblower
in the tobacco industry. Yeah and it's a pretty
good movie actually. I recommend you
watch that actually. It's just a random,
not necessarily related to the course
but-- it's kind of actually related in
a way because there's an ethical choice in
leaking information. But that's an entirely
different topic to talk about, the ethics
behind leaking information. But anyway, the point is that
he is releasing information that would otherwise be securely
protected but he's doing it-- and in the movie, of course,
he has his own reasons for doing it but the
whole idea of releasing unauthorized information
by an insider is a potential threat to
the privacy of anybody. Because he might
have good intentions but there could
also be people that may have
not so good intentions. When a company or an institution
uses personal information, there's a privacy threat. Whether it is because
unintentionally somebody is potentially
leaking information. Or also on the
intentional side, they might just have no
regards for your privacy and sell the information. So those are a couple
of the risks involved. Here's the thing
about the risks, as new technology gets
created, there's also new potential risks coming up. For example, as we more and more
grow in dat
a and, for example, a government or private
companies have a database. That is data that is potentially
waiting to be stolen. It doesn't mean there's
going to be stolen but I'm saying it's a
higher risk because more information is out there. Furthermore, with the advances
in machine learning and data analytics and data mining and
in general processing big data, we have more and
more sophisticated tools to be able
to do data analysis and get more information
from the data. Which is potentially
more information out of what would otherwise be private. And also surveillance
and analysis. So there's a potential
vulnerability of data that is out there. It's like keeping cash in your house: the higher the amount of money you have, potentially the bigger a target you might be for somebody, if they find out that you have money in your house. Similarly with a company-- a couple of the companies
that I remember, for example-- a long, long time ago got
hacked was Target, right? You think Target is like-- you buy groceries at Target
or like buy random articles at Target. But, here's the thing,
Target is a big company. And they have a lot of
data on their customers and employees and whatnot. So the more data
they have, the more of a juicy opportunity
for a hacker to come in and steal that data. And so Target might
not realize that-- Well, they clearly
did, I suppose. But that kind of opens them up to more risk; the data is vulnerable and they have to make sure it's safe. There's always that issue
that as we grow and have more and more data,
it potentially becomes a higher
risk of something happening to that data. As an example of the amount of data that we have-- I think I might have said this in the previous lecture-- Google and Facebook handle data in the petabytes. They process, I think, a petabyte a day or something. It's beyond terabytes at this point. And so search engines themselves are collecting-- well, I'm pretty sure it's petabytes but, if not, at least terabytes of data daily.
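To put that petabyte-a-day figure in perspective, here's a quick back-of-the-envelope calculation; the daily volume itself is just the lecture's ballpark, not a verified number.

```python
# Rough scale check of "a petabyte a day" of data. The input figure is
# a ballpark estimate, not a verified number.
PB = 10 ** 15                   # bytes in a petabyte (SI units)
seconds_per_day = 24 * 60 * 60  # 86,400 seconds

rate_gb_per_second = PB / seconds_per_day / 10 ** 9
print(round(rate_gb_per_second, 1))  # 11.6 -> roughly 11.6 GB ingested every second
```

Even at "only" a terabyte a day, the same arithmetic still leaves megabytes arriving every second, around the clock.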
This data is typically analyzed to target advertising, to develop new services, or to do analysis and things like that. And here's the thing: who
gets to see that data? And why should you care
who gets to see that data? Well, the answer to that
is because maybe you don't want people to see it. We go back to what we
were talking about, the freedom from intrusion. You don't need to give a reason
for people to leave you alone. If you want to be left alone, you don't need to-- it's not like
guilty until proven innocent, it's innocent until
proven guilty. You want to be left
alone and, unless there's a reason to not be left alone,
then you want to be left alone. So the same thing
can apply with data. You don't want people to handle
your data or see your data, then you shouldn't
have to defend yourself to not allow that. So that is one of the
reasons why we should care. Now the other counterargument
is like-- well, let's say that somebody who's seeing it is actually doing it to bring you a better service. It's actually going
to benefit you. Should you let
them do it? Is that a sacrifice, per se, that you're willing to make? And the answer is probably yes, otherwise you wouldn't be using all the social media websites. But it's a question to always ask whenever you are giving
information to somebody else. Is it worth it? What are the pros and
what are the cons? Nowadays-- I talked about this before-- with smartphones you have location services for
like Google Maps and things like that. And so that's great when you're
using Google Maps or Apple Maps or whatever to try
to find an address. That is cool because it
helps you to navigate. That's the good
uses of location. But sometimes location data is stored and sent without your knowledge, the user's knowledge. And why is that done? As an example, let's say
that you are in California, and you go to Disneyland. You're using Facebook
or Twitter or something, and you are actually
near the Disneyland park. Facebook wants to
know where you are because it wants to tell
people that are in the area. They're like, hey,
this guy or this girl is at Disneyland right
now and they are probably going to want to go
get something to eat. They're probably on
holiday, they might-- I mean, they probably do
already but just in case, they might not have
a hotel or they might want to go sightseeing there. Maybe there's a
concert or something. And so they want to be able to target that specific information and give it to the advertisers, so that they can give you ads that have a higher chance of you finding them useful-- in the sense of finding something that you would actually want to purchase, acquire, or get a service from. And so that's the reason why they send it.
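As a sketch of the kind of check an app could run before deciding you're "at Disneyland," here's a simple geofence test using the haversine formula. The coordinates, the radius, and the idea that an app does exactly this are illustrative assumptions, not a description of any real app's code.

```python
# Hypothetical sketch of a geofence check: "is this user within a few
# kilometers of a point of interest?" Radius and threshold are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km ~ Earth's mean radius

DISNEYLAND = (33.8121, -117.9190)  # approximate park coordinates

def near_park(user_lat, user_lon, radius_km=3.0):
    """True if the user's reported location falls inside the geofence."""
    return haversine_km(user_lat, user_lon, *DISNEYLAND) <= radius_km

print(near_park(33.8100, -117.9200))  # True: a few hundred meters away
print(near_park(34.0522, -118.2437))  # False: downtown LA, tens of km away
```

Once a check like this fires, the app has a high-value fact about you-- "this person is at a theme park right now"-- that advertisers will pay for.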
The question is: is that OK? And there are two answers to it. On one side, again: privacy, freedom from intrusion. Maybe you don't want them to sell that information. Then Facebook can come back and say, well, don't use Facebook. And so now you have a choice. Do you use it and risk giving away your information? Or do you not use it? Or is there a third choice? And the third choice would be that maybe there's a way you can turn that setting off, like the location setting. And here's the
thing, there might be but they don't have to do it. They can be like, if you don't
want it, don't use the app. They could do that. Once I talk about like GDPR and
that kind of stuff later on, you're going to see th
at some
websites will kind of do that. They'll just be like,
you accept the cookies or we don't even
serve you at all. And some of them will be like,
OK, we'll still serve you. We'll just give you random ads. So there's a privacy risk
related to things like location which is something relatively
new since smartphones came out. There's always new
technology and new risk. Essentially, nowadays,
anything we do in the internet is recorded. You can kind of start
thinking about it that way, because it's frankly the truth. When you're at home on
cable internet or DSL or whatever it is, or if
you're on a smartphone using a carrier's wireless signal
like 5G or whatever, 4G. The fact of the matter is that
the cable company or the phone company can see and thus log. They do log every single
website that you visit. Now, if you're using HTTPS, they can't necessarily see a lot of detail about what you're doing on that website, because it's encrypted. But they can definitely see what websites you're going to. So if you're going to some shady website-- for legitimate reasons, maybe; they don't know the reasons-- they just know you went to a shady website.
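As a rough illustration of that split, here's a small sketch using Python's standard `urllib.parse`; the URL itself is made up. The hostname is what the network needs to route the connection, while the path and query ride inside the encrypted tunnel.

```python
# Sketch: which parts of a URL the network (ISP or carrier) can observe
# when you browse over HTTPS. The URL is an invented example.
from urllib.parse import urlparse

url = "https://example.com/account/settings?token=secret123"
parts = urlparse(url)

# The destination hostname is visible: it's needed to route the
# connection (via DNS and, typically, the TLS SNI field).
visible_to_isp = parts.hostname

# The path and query string travel inside the encrypted TLS session.
hidden_from_isp = parts.path + "?" + parts.query

print("ISP can log:   ", visible_to_isp)
print("ISP cannot see:", hidden_from_isp)
```

So the provider's logs can say *which* site you visited, and when, but not which page you read there or what you typed into it.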
What do they do with that information? Well, it's kind of tricky, because that information is technically yours. And they can analyze it, they can sell it, they can even give it away if they wanted to. But they have to remove certain things that are associated with you. So they can give you a client ID number, for example, and be like, OK, this client ID
is doing this, this, and this. And then they can sell that. But they can also just
keep a copy and say, John Wick was looking
at websites about-- what would John Wick-- dogs. Websites about dogs, yeah. So they can keep that and
sure bet that they do keep it. Because here's the thing,
we'll talk about it later, but if there's a warrant and
that warrant requests to see your browser
history, you sure bet that the cable company is
going to give that information in a subpoena-- I think it's a subpoena or something-- to the authorities without a doubt. They're not going to protect
you because that's the law. The law says if
they got a warrant, then they got to turn that over. Could they delete it? Could they just
not keep it at all? I'm not sure but
I think they can. I don't think there's a law
that says they have to keep it but I'm not sure. It really depends
probably on state laws as well and even municipal laws. But the point is
that, on average, I would say that it's usually
kept for quite a while. Every single website that you go to is kept by them. In fact, I actually
have it here. I opened the link. [CLICKING] I'm not even going to say who wrote this or whatever, because it
doesn't really matter. What matters is the topic. Back in May of 2020,
there was a vote-- and it kind of became viral-- the claim was 37 senators
voted for federal agencies to have access to
your internet history without obtaining a warrant. And this became a
big viral thing. And there's some truth
to that, but it's not as blatant as that. That didn't really
happen the way that they put it where now
the government can just look at your browser
history which, again, it's kept by the cable
company without a warrant. It's not true. It's a very specific case. Typically they do
need a warrant. The special case is when it involves terrorism and, I believe, it's
only for foreigners. It's not for internal
USA tracking. I had it highlighted but I
scrolled up so I lost it. Here we go. "This data can
only be
obtained for an American after approval from
a federal court, and only if it's related
to counter-terrorism and counterintelligence
investigation. And this authority
is nothing new." Apparently it's a refresher
of the Patriot Act. Now I'm not going to
argue whether it's in the first place a good
thing or a bad thing, OK? That's unnecessary
for this course. At least not yet. Maybe when we talk about net neutrality, but for now that's
beside the point. But at least in current
US law,
they cannot look at your browser history with
the small exception, I think, of certain terrorist cases. But they still have to
already have a federal case and they have to have an
approval from a federal court. I mean, they could change the
law tomorrow, technically. I don't know if
the Supreme Court will hold up to that change if
they just allowed it or not. But the point is, as
it currently stands, they cannot see it. However that doesn't
mean it's safe. Because, here's the
thing, if the cable company is holding it, as I
said before, leaks can happen. And if those leaks can
happen your information could still go out there and
that has happened before. So that again,
brings up an issue, even if the cable company has
it and even if the law protects you from the government
at least looking at it or then releasing it to others. That doesn't mean you're 100%
safe because, again, leaks can happen and do
happen and then that information gets released. Now I'm not here to
make you paranoid
and make you start using, like,
Tor and those kind of things. I'm just saying that that
data is being recorded, and a lot of people are not
aware of that collection of data. Be aware. Be aware that that is
always a privacy risk that-- it's always on the
forefront of what do we do about this kind of thing. Software is complex at the end of the day, and it's not easy. And nobody-- I mean, frankly,
very few people actually read the terms and
conditions of the website when they click "Accept." The fact
of the
matter is most likely you don't even know what
you accepted or agreed upon. So here's the other
side of things and I'm going to actually put
this on because it's important. When you go on a website
and you do something, it's insignificant in
the big scheme of things. However, the problem
is that you're not just going to one website, you're
going to a lot of websites. You're doing a lot of things. You're communicating
with a lot of people. And so this is where
modern technology is kind of changing things around privacy. Before, there'd be a database about what kind of websites you go to, or something like that, and that would be the end of that. And maybe there's
another database about who you talk with,
like a messenger system. Maybe there's another
database of what you buy. But very few times will they
actually put them all together. Here's the thing,
nowadays it's becoming more and more common to
have a collection of data. And when you collect
all these small items, you start to actually see the big picture again, in detail. Now I also talked about how, a lot of times, when they sell your data, they anonymize who
it's coming from. So they'll say,
like client 15867 likes to go to this
website, likes to do this, likes to do that, spends this
much time using the internet, uses the internet at
this time of the day, and is in this zip code. That's about as much as they'll give, probably. Here's the thing though: you
get this from the cable company but then Facebook
has its
own set of information and then Google has their
own set of information. And, here's the thing, when you
collect all those databases, it's becoming much, much easier and more practical to re-identify actual individuals from the information, due to the quality of the information and the power of analytic tools and data search. In short, if I combine
the Google data, the Facebook data, the cable
company, Cox data, and then a couple of others, no longer
is it just client 15867. Now I can actually be like, oh, by the way, this isn't just a client. This is actually John Wick. I'm actually putting two and two together, essentially, and re-identifying it. That becomes a problem
because the whole idea of making it anonymous was
to avoid that identification. But now, bam, it
is put together. And guess what,
the fact that you put it together yourself doesn't automatically make it OK-- because, technically speaking, you're just putting two and two together. You never got that
information directly, right? So is that OK? Should we
allow that? If they reidentify
somebody, can they do that? Is that OK or has now become a
violation when we put together the tiny bits and pieces
together, and is it now a violation of your privacy? So that's one of the big issues that we're facing today when it comes to privacy: again, the re-identification of data-- of people, basically, of individuals. And that's again because machine learning and data analytics are growing very, very fast.
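To make that concrete, here's a minimal sketch of re-identification by linking two "anonymized" datasets on shared quasi-identifiers. Every record, name, and client ID below is invented for illustration.

```python
# Hypothetical sketch: re-identification by joining "anonymized" data.
# All records, names, and client IDs here are invented.

# "Anonymized" browsing data sold by an ISP: no names, just a client ID
# plus quasi-identifiers (zip code, usual active hour).
isp_data = [
    {"client_id": "15867", "zip": "89052", "active_hour": 22, "sites": ["dogs.example"]},
    {"client_id": "20114", "zip": "10001", "active_hour": 9,  "sites": ["flags.example"]},
]

# A second dataset (say, from a social network) that does carry names,
# along with the same quasi-identifiers.
social_data = [
    {"name": "John Wick", "zip": "89052", "active_hour": 22},
    {"name": "Jane Doe",  "zip": "10001", "active_hour": 9},
]

def reidentify(isp_rows, social_rows):
    """Join the two datasets on the quasi-identifiers they share."""
    matches = {}
    for row in isp_rows:
        for person in social_rows:
            if row["zip"] == person["zip"] and row["active_hour"] == person["active_hour"]:
                matches[row["client_id"]] = person["name"]
    return matches

print(reidentify(isp_data, social_data))
# {'15867': 'John Wick', '20114': 'Jane Doe'}
```

Neither dataset alone names the browsing history's owner; the join does. Real re-identification attacks work the same way, just with far richer quasi-identifiers and far bigger tables.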
Ultimately, information on a website is public and available to everybody. So always be careful what
you do and say on a website because, as they say, once
it's on the internet, it's there forever. It will outlive you. Frankly, data collected for one purpose will typically find other uses. When you go in and
submit some data for-- like say you make a purchase
on a website for flags. I just saw a flag so
I thought about that. So you're buying flags, right. And the person, the website
that sold you the flags, goes and sells your data to
somebody
else who makes-- let's say you bought
a flag for the US. Maybe they're like, OK
this guy likes US flags. Maybe we can sell him pins
with the US flag on it. And so they sell that
to the pin people. And next thing you
know this information is going all the
way to the point that now you're sort of
making an indirect assumption. OK, this guy likes the US flag. Maybe he's like a very
patriotic guy or something. Now you're making all
these assumptions. And again, they're selling
this information about
you. You should have control
about that, shouldn't you? Maybe you're not. Maybe you just like flags
because you were collecting flags from all over
the world, and you happened to be buying a US flag that day, but maybe you're not even American. Maybe you live in Mexico. So essentially that
information can get out of hand, from you doing one thing to you being connected with something completely unrelated. And I know we have all
seen a situation where somebody sends a
link to something and we're watching something. I don't know-- somebody sends
a meme or something on Discord or Messenger of like, hey. Check out this picture of
a Barbie doll or something. And then, next thing you know,
you're getting Barbie doll ads and you don't like
Barbie dolls at all. They're like completely
unrelated to what you like but because the
algorithm basically saw you looking at a
link about a Barbie doll, now you're getting these dolls. I'm sure we all have had a
situation like that, where we're getting unsolicited things about totally random stuff that we don't even like. But, if you think back,
you're like, oh yeah. That one time that somebody
sent me a joke link and now I'm getting
all these weird ads. Yeah, I'm sure
we've all seen that. That's another
problem that we have. Again, one thing I have
here is, the government can request sensitive data. And here's the thing,
when there's that-- I read about this
a lot of times, when there's the
re-identification of data, they will typically sell that to the government. And it's perfectly legal because
again they reidentified it. They didn't get it
directly from one source and so it's technically not
giving away your privacy because they put it
together themselves. And the government will
happily buy it and store it and it will request
sensitive data. And so here's what it all boils down to. And I'm going to
write that down. At the end of the day-- you know everybody says it's
every person's responsibility to keep themselves
safe. We can't trust
other people to keep us safe at the end of the day. And that's true, I agree. However, it's not
the entire story. At least when it comes
to data and information because the fact of the matter
is we cannot directly protect information about ourselves. And it's not because we're
not smart or we're not useful. No, it's merely because we have
no way of controlling that. It goes beyond us. When we go on a website
and we use the website, it's out of our
hands at that point. I'm not saying that's
a good or bad thing. But at the end of the
day, what I am saying is, we lose control about
that information. We depend on the businesses
and organizations that we give the data to, or
the information, to protect it. [WRITING] Yes, always keep
yourself safe but I guess don't feel too bad
because at the end of the day, it's not really up to just you. And because of
this last statement kind of comes into
the next part. If we can't keep the
information safe ourselves, who can we get
help from to ensure that businesses and
organizations are actually protecting it? Because while I would
somewhat naively say that it is in a business's best interest to make their customers happy. If a customer wants
their data protected, it is in their best interest
to protect the data, right? I would say that a little bit
naively but I would say it. But here's the thing,
that's not always the case. They might not-- I
mean, even if it is, they might still have
somebody hack in and steal it. Going bac
k to Target, I'm
pretty sure Target did not like the fact they got hacked
and were all over the news and basically made fun of. They didn't like that. That makes them look bad. They lose business
because of that. I'm pretty sure some
people were like, I'm never purchasing from
Target again because I got my identity stolen from them. People can and will
make those choices with perfect, valid reasons. And so, at the end
of the day should we let the government
help us with that? Or how can we do it
because
we can't do it directly. We just don't even have
the power or the time or the ability to
be able to go-- I can't go on and be
like, hello Facebook. I'm here for my annual
audit of my information so that I can look
at it in detail. They're going to give me
something like SQL and-- I happen to know
SQL but if you don't know how to do SQL and
do queries then good luck accessing the database. And so should that be
the government's role to come in and help us
to protect our data? And that's
kind of where
the GDPR and those kind of protections came in. Let's talk a little bit now
about some terminology that's going to be useful to know. I'm going to just
write them down first and then I'll talk
about each of these. We have personal information. We have informed consent. [WRITING] And we have invisible
information gathering. [WRITING] Personal information
is basically any information relating
to an individual person. So any info-- [WRITING] Relating to an individual. Which I'll just
abbreviate like that. Person. Informed consent is the
concept of a user being aware of what information is collected
and how it is being used or will be used in the future. That's basically you knowing
what information you're giving up, why you're giving it up, and what's going
to happen to it. [WRITING] You don't have to agree
with it, potentially, but you're aware of it. Which, I suppose,
technically means that if you don't
agree with it you could stop, cease and
desist, using that website or technology. But let's be honest, a lot
of the times the unfortunate story of things is that
we don't like something but we don't really have a
choice and we have to use it. Finally we have invisible
information gathering. That's a collection of personal
information about a user without the user's knowledge. That's going to be
like informed consent but I would say uninformed
consent, I suppose. Basically this is when somebody
is taking your information and you have no
idea it's happening. [WRITING] An example of that:
just go on a website, and 5 seconds later you kind of fit this category-- at least if it's not a European website. But even if it is, I'd say it's not very good. It happens more often than not.
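As a sketch of how ordinary and invisible that gathering can be, here's roughly the kind of record a web server can write for every single page view; the field names and values are illustrative, not any particular server's format.

```python
# Sketch: the kind of record a web server can log for every page view,
# without the visitor ever being told. Field names/values are invented.
import datetime

def log_visit(ip, user_agent, referer, path):
    """Build a log entry like the ones kept in ordinary server logs."""
    return {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "ip": ip,                  # roughly identifies your location and ISP
        "user_agent": user_agent,  # your browser and operating system
        "referer": referer,        # the page that linked you here
        "path": path,              # what you looked at
    }

entry = log_visit("203.0.113.7", "Mozilla/5.0", "https://example.com/", "/article/42")
print(entry["ip"], entry["path"])  # 203.0.113.7 /article/42
```

Nothing here requires consent screens or special tracking code-- it's just what a server sees in the request it has to answer anyway.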
Let's talk a little bit more about different things on websites. You've probably heard the term cookie before. [WRITING] So what is a cookie? A cookie is a file that a
website stores on the visitor's computer, essentially. [WRITING] You can even go and look at them. And there are different kinds of cookies. That's kind of beyond
the scope of this course. You have to take an
internet security class for that, which they do offer. But there's like first party,
third party, and whatnot. So I'll kind of
leave it at that. But there's different categories
depending on what they're for, what they contain, and whatnot. But ultimately,
it's a little file that can be used to
store information so that, allegedly, your
experience can be improved. A very, very common usage of that is, for example, when you don't want to log in to the same website every time. So let's say you're going to go and check your email. You go on Gmail or Yahoo or something, and you log in. And then you click that little check mark that says remember me next time. What happens is, you download a special file that your computer is then going to store and make accessible. So the next time that you visit the website, like tomorrow: you go to sleep, you wake up tomorrow, the website is going to look in your computer. It's going to scan files in a specific area where cookies are kept. And if it sees a cookie related to the website, it's going to look at the key and compare it to a key that is stored on the server of, like, Google, and be like, OK, this person is John Wick. And then it's like, OK, John Wick said to remember this computer and he has the right key, so that's fine. We'll log him in automatically, or at least we'll suggest the username and just let him put the password in again, or maybe not ask for two-factor authentication anymore, that kind of thing. It is convenient in that setting.
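Just to make that flow concrete, here's a rough sketch in Python of what a "remember me" cookie check could look like on the server side. This is purely illustrative: the token names, the in-memory store, and the function names are all made up for this example, not how Gmail or any real site actually implements it.

```python
import hmac
import secrets

# Hypothetical server-side store: token -> username. A real site would keep
# hashed tokens in a database, with expiry dates and per-device entries.
server_tokens = {}

def issue_remember_me_cookie(username):
    """User checked 'remember me': mint a random token for this browser."""
    token = secrets.token_hex(16)
    server_tokens[token] = username
    return {"name": "remember_me", "value": token}  # the browser stores this file

def check_returning_visitor(cookie):
    """Next visit: compare the presented token against the server's copy."""
    for token, username in server_tokens.items():
        if hmac.compare_digest(token, cookie["value"]):  # constant-time compare
            return username  # keys match: log them in automatically
    return None  # no valid cookie: show the normal login page

cookie = issue_remember_me_cookie("john.wick")
print(check_returning_visitor(cookie))  # the server recognizes the returning browser
```

The `hmac.compare_digest` call is just the standard way to compare secrets without leaking timing information; the rest is the same look-at-the-key, compare-to-the-server's-key idea from above.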
Cookies can also be used to send requests to websites asking them not to track you as much. So you can have a special cookie that can be read by a website as a sign that you don't want to be tracked in certain ways. We'll talk about those, I
think, later but if not they'll definitely talk about them in
an information security course. So cookies have a lot of
usages but
essentially it's something that you are
storing in your computer, not for your usage really but
for websites to then check on them again next
time you visit them so that they remember
stuff about you. Again, ideally for
your benefit to be able to log in
automatically or know certain settings of a website. You like dark themes
or light themes like for Google and whatnot,
and those kind of things. However, those are
all the great usages but there's also the not
so great usages of them. And those
are when
cookies are used to track you when you don't want
to be tracked because, guess what, the cookies can keep track
of what websites you visit. And then, when you go
on a specific website, it looks at the history
of websites you've seen and then uses that to get
information about you. And so that will be sort of a secondary use, which is a use of personal information for a purpose other than the one it was originally provided for. So: use of personal info for a purpose other than the original purpose. [WRITING] Finally, we have data
mining which, I don't like the term data mining. It's like big data, it's very
generic and not so great. I think more of a couple
of different things. Data analytics is
probably a better term. But essentially, when you hear data mining, it means researching and analyzing masses of data to find patterns and develop, or discover, new information from
existing data that you have. So as you're looking
at a bunch of data-- And this is where-- Because I hear all the time like, oh, a data miner went in and found in the video game files that there's going to be a new skin coming out. That's not really data mining, per se, it's more if it's just
like scanning data. Data mining involves
a certain layer of creating new information
from existing information. And I think that's
more of like if you found a pattern
in specific files and then generated
something new, then yes. I would say that's
more data mining. But when you're just searching data and you find something
that's already been there but you didn't see
it before, I wouldn't say that's truly data mining. But unfortunately, the term is kind of overused to the point that-- well, if people use the term for that, then I guess it's just part of the normal language. But ultimately, just be aware
that when you hear data mining or data analytics,
they're typically talking about
searching and analyzing a lot of data to find
some sort of pattern or to discover new information. Search, find patterns, discover information. An example of this is
unsupervised machine learning. A type of machine learning called unsupervised, like clustering and whatnot, is an example of data analytics or potentially data mining, where you feed the data to a machine and it's going to find those patterns for you. That's what unsupervised machine learning is going to do. And then, using that, you can, for example, cluster data, and that could be very useful. In case you don't know what clustering data is-- [WRITING] Suppose you chart the data. You've got to obviously make
it into two dimensions or-- it could be three, it could
be any dimension actually. But what I'm saying is you have
to dimensionalize the data. And so suppose you
have data like this. And you're told, OK, I would
like you to take that data and categorize or classify it
into one group, two groups, three groups, and four groups. Well, one group is
kind of not good because that would be just like
the entire data but two groups, three groups, four, and five. If you're splitting
into two groups, the learning algorithm might
actually make this one group, and then maybe make
this another group. Now it could do
it differently, it could have done it
vertically, but let's just say it does that. For three groups, it might
actually make this one group, and then maybe this
is another group now and maybe this is another group. Four groups, maybe,
it's just going to do something like this, and
then that, and then maybe this, and then that. And for five groups it's
going to get a little tricky. It's probably
going to try and do something like this and
then maybe like that. Probably some of these
in there as well. And this-- Well,
again, I don't know. I would have to actually plot
this and see what it does. The idea is: suppose that this data is about what movies you watch. What I'm trying to do is classify it. Let's say we add a new person. This is a database of a lot of people, and in here we're going to add you. And this is you right here. You happen to be right there. Let's say that we're
trying to promote a movie and say we want to recommend
you a movie, like Netflix. They want to
recommend you a movie to watch based on your
previous list of movies. So what the unsupervised
machine learning did is try to classify the movies
into two different types of movies that people watch. So this group of people like to watch this kind of movie and this group of people like to watch that kind of movie. So obviously, if you're classified into this two-group setting, then you should get advertised any of
the movies in this category that those people like. If you go to a higher number of groups, in this case,
you're going to have to do some sort of distance. So you can do Euclidean
distance, for example, and find the closest
group and then just get associated with that. So that would be like a
k-means algorithm or something. And maybe you get associated
with a group over here. Maybe I'll use a
different color for that. And, of course,
technically speaking, by having more groups, then this
is more carefully aimed at you. However, if you go too far
and you make too many groups, you might get associated
with a wrong group. So ideally I would say that the
best clustering in this one, I would go with either three or
four, maybe three probably. But four with this
data
is kind of nice too because there is a pretty
good splitter right here. Maybe if you get classified
into that group of people, then what they can see
is what kind of movies that group of people
watches and then recommend you those movies. Again, of course,
for this we have to track the movies
that you're watching, so there's that slight privacy invasion. But anyway, that's basically the terminology behind data mining or data analytics or unsupervised machine learning.
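To make the clustering idea above concrete, here's a bare-bones sketch in Python. The 2-D "viewing habit" points are made up, and this is a toy k-means, not what a real recommender like Netflix uses; real systems would use a library and many more dimensions.

```python
import math

# Toy 2-D "viewing habit" points for existing users (all numbers made up).
users = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # one loose group
         (5.0, 5.2), (5.3, 4.8), (4.9, 5.1)]   # another loose group

def kmeans(points, centers, iterations=10):
    """Bare-bones k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c)) if c else ctr
            for c, ctr in zip(clusters, centers)
        ]
    return centers

centers = kmeans(users, centers=[(0.0, 0.0), (6.0, 6.0)])

# A new user joins: find the closest group by Euclidean distance, then
# recommend whatever that group of people tends to watch.
new_user = (4.5, 5.0)
group = min(range(len(centers)), key=lambda i: math.dist(new_user, centers[i]))
print(group)  # the new user lands in the second cluster (index 1)
```

The `math.dist` call is exactly the Euclidean distance mentioned above, and the assign-then-recompute loop is the whole k-means idea: the machine finds the groups; nobody labels them beforehand, which is what makes it unsupervised.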
A couple more, and I'll come back to that quote that I had there. We also have computer matching-- [WRITING] And computer profiling. [WRITING] Computer matching is
combining and comparing information from
different databases. Like I said, not
necessarily the goal being to reidentify
someone but merely to be able to have
better information for a specific user. An example of that could be
like using a social security number to be able to
match certain records with other records. Like, the DMV is going to check its records against the IRS's. And because you probably gave your social security number to both the DMV and the IRS, they can use that to link you together and see whether you have a driver's license or not. That would be an example
of computer matching. So let me just write that down. [WRITING] Combining and comparing information from different sources or databases. So I'll just put different sources.
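Here's a tiny sketch of computer matching in Python. All the records and numbers are invented for illustration; the point is just that a shared identifier like an SSN lets you join otherwise separate databases.

```python
# Hypothetical records from two agencies, keyed by the same SSN.
# Every name and number here is made up for illustration.
dmv_records = {
    "123-45-6789": {"name": "John Wick", "license": "valid"},
    "987-65-4321": {"name": "Jane Doe", "license": "expired"},
}
irs_records = {
    "123-45-6789": {"filed_taxes": True},
    "555-55-5555": {"filed_taxes": False},
}

# Computer matching: the shared identifier links otherwise separate databases.
matched = {
    ssn: {**dmv_records[ssn], **irs_records[ssn]}
    for ssn in dmv_records.keys() & irs_records.keys()
}
print(matched)  # only the SSN present in both databases gets linked
```

Notice that neither agency learned anything new on its own; the combined record only exists because the same key shows up in both sources.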
Computer profiling, on the other hand, is analyzing data to determine characteristics of people most likely to
engage in a certain behavior. It's kind of like what
we're doing up here with the unsupervised learning. Here, what we're trying to
do is basically judge people, or profile them, based on the
data that we have of them. Moving on to the next thing,
we have the opt out and opt in. So these are
basically common forms of providing informed consent. Opt out and opt in, and there are two categories. Opt out is basically
when a person has to request, usually by
checking some kind of box or something, that an organization should not use their information. Like when you register on
the website and you're like, do you want to be in like
the mailing list or things like that. And in this case,
typically the check mark is already checked in and you
have to physically uncheck it. That is an example where you're technically opting out, even though it's made to seem like an opt in; it's really an opt out. Alternatively, they
might not even ask you. They might just be like, well, you registered for our website so we're just going to assume
that you're OK with giving us your information. And so you have
to take effort out of your time to contact
them and be like, no. I do not want you to take my
private information, please. So that will be an
example of opt out. Opt in, on the other hand, is
the collection of information that can only be used explicitly
if the person permits it. Again, typically with
a checking of a box. This is like, do you
want to opt in for us
es of location history. Like if you download
an app and usually, when you download the app, and
you run it for the first time, and it pops up a little
message that says, would you like to share your
contact information. This, this, and that with
this app, you have yes or no. This is an opt in because you
have the choice to say no. The opt in may not work but
you have that choice, right? On the other hand, with opt out you don't have that choice. So it's more of like you have to actually work it out with them. [WRITING]
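The difference between the two schemes really comes down to the default value of the consent flag when the person does nothing at all. A hypothetical sketch:

```python
# Illustrative only: opt out vs opt in is really just the default value of
# the consent flag when the user does nothing at all.

def may_use_data(user_choice=None, scheme="opt-in"):
    """Return whether the organization may use the person's information."""
    if user_choice is not None:
        return user_choice            # the person made an explicit choice
    # No action taken: opt-out assumes yes, opt-in assumes no.
    return scheme == "opt-out"

print(may_use_data(scheme="opt-out"))                     # True: box came pre-checked
print(may_use_data(scheme="opt-in"))                      # False: consent must be explicit
print(may_use_data(user_choice=False, scheme="opt-out"))  # False: the person opted out
```

Same code path either way; only the default changes, which is why the pre-checked checkbox matters so much in practice.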
All right, so that covers most of the privacy risks and principles. One more thing to
talk about is what is known as the fair
information principles. Those are basically sort of rules that individuals or
companies should follow when handling somebody else's data. I guess technically
your own data too. But it's more for
like when you're handling other people's data. And so there are
seven different things and I guess I'll write
them down because they are pretty relevant. But I'll just summarize them. So the first one is
always inform people when you collect their data. Don't be shady. If you're collecting
information, or data, inform them about it. Next, collect only the
data that you need. [WRITING] Sometimes as
developers, it's just easy to check everything
and get all the data and then figure out what we
need and what we don't need. This causes a potential problem
because we are sometimes getting information that
we really didn't need, and that's because we're lazy at the end of the day. But the fact of the matter
is, we really should only collect what we need and
nothing more, because it only opens risks of losing that data and being looked at negatively. Because if all we needed
was one piece of information but you just put
five, people are going to start asking questions
at some point and be like, why did you need all of that? And then are they really going
to believe you when you say, oh I didn't know what the
function call was just to get this one piece of information; I just kind of got them all. Yeah, they're not going to believe you on that. They're going to
think you're probably like stealing information
from them or spying on them. Don't put yourself
in those situations. Next, offer-- and this
is very important, offer a way for
people to opt out. [WRITING] If they don't want the
information collected, then offer a way. Now, is it possible
that by them opting out, you lose functionality in
the app or the program? Absolutely. Is
it also possible
that you can't even use the app afterwards because
that information is extremely critical? Like let's say you have
a navigation program but they don't want to share
their location history. And now the app is like
pretty much useless. That's OK, tell them. Be like, we're opting you out
but, unfortunately, the app is useless at that point, so please uninstall the app, or we'll uninstall it for them. That's OK, if they choose that,
it is their right to do it. Furthermore, once they're OK with the data, and you have collected the data, and you have informed them that you collected the data, only keep the data as long as it's needed. This is really,
really important. And so many other
hacks would have been avoided if people
actually followed this one. If you're collecting somebody's
data, use it as needed. You told them how
they're using it, they're OK with you using
it, but don't open yourself to being hacked and
losing this data. If you only need
it for that
moment, or maybe for a day or two,
then delete it afterwards. It's only a hazard
or a risk for you to keep that data any longer
because if you get hacked, if the data falls
in the wrong hands, it's going to come
back to you when you didn't need to keep it. Now, if you have to keep it, that's a different story; but if you didn't need to keep it, then don't keep it. It's cheaper too, just get rid of the data. You get more space, I suppose.
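The "only keep data as long as needed" principle can be as simple as a periodic purge of records older than your retention window. A hypothetical sketch:

```python
from datetime import datetime, timedelta

# Hypothetical collected records, each tagged with when we collected it.
records = [
    {"user": "a", "collected": datetime(2024, 1, 1)},
    {"user": "b", "collected": datetime(2024, 6, 1)},
]

def purge_expired(records, now, retention=timedelta(days=30)):
    """Keep only records still inside the retention window; drop the rest."""
    return [r for r in records if now - r["collected"] <= retention]

kept = purge_expired(records, now=datetime(2024, 6, 15))
print([r["user"] for r in kept])  # only the recent record survives the purge
```

The retention window itself is a policy decision; the code just enforces whatever you told your users you would do.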
Next, and this is actually more important than it reads: maintain the accuracy of the data. I've been telling you
about the whole Barbie example: as a customer, if you get associated with something that you're not even liking, because somebody went on your browser and searched for it, now you're getting all these ads. It's annoying, right? You don't want that to happen
to your customers or to people that you're accessing data from. You want to make
sure that the data is as accurate as possible. That prevents problems,
arguments, lots of things. It's good practice. Finally, if you have to hold
data for a long time or even if it's for one
second, you always want to make sure
that you protect the security of the data. So you keep the data safe. Easier said than done otherwise
no hacks would happen. So easier said than done. But yes, protect the
data, keep it safe. Data, if you have a business,
is probably so important that if it gets lost,
you lose your business. So treat it as such. And finally, this one's a little
bit more fancy but
useful: develop ways, or policies, for responding to law enforcement requests for the data. It's going to happen
at some point. Law enforcement is going to
come in with a legal request to get the data and how are
you going to respond to that. First of all, you have to
follow the law because otherwise you're going to get in trouble. Now, I'm not saying that-- that's a whole ethical question,
whether the law is correct or not. But I'm just saying
that you have to comply with regulations. And
if the law says, here's
a warrant, a subpoena, for this person's
browser history, you can't just be like, no. I'm not giving it, I
don't believe in that law. Because then you're going to
get shut down pretty quickly. That's not the place to fight,
unless you want to fight, I suppose. But normally, you want to have
a system set up to do that. And only give what is requested. You don't need to give anything
else than what is requested. If they request
one piece of data, you want to make sure you just give them one piece of data, because you're not required
to give anything more. And giving anything more
can be also a problem. Those are going to be
basically your fair information principles. I wouldn't say that they're complete and exhaustive, but they'll at least get you started on a good path for holding data. Moving on to the next
topic on the list is the Fourth Amendment. So let me go ahead and read
you what the Fourth Amendment is after I take a sip of water. [CRINKLES] All right here
we go. The Fourth Amendment
is, of course, part of the Bill of Rights. So the Fourth Amendment reads: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
but what it boils down to is the Fourth Amendment
sets a
limit on the government's right to search our homes
and businesses and seize documents and
other personal effects. It basically requires
the government to provide probable cause. Probable cause is
a big word, you can Google it, and figure
out what constitutes probable cause or not. The government has no right to
basically walk into my house or walk into my car and just
search it and take things from it either. They can't just
search and seize. There's a very popular
video that everybody
should watch that's
called, I think, Don't Talk to the
Police or something. It's not political
but basically it says that unless you
have a lawyer present, you really should not
be talking to the police because anything, as they
say, anything you say can and will be
used against you. Ultimately, the Fourth
Amendment is the one that basically gives us that right. That if we don't-- if we say no, if the police
comes to my house and says, I want to search your house,
and they don't have a warrant,
you can say no. You can close the door. That's it; that's the end of the discussion with the police right there. Whether you choose to do that
or not is a different story. Definitely beyond the scope
of this course but ultimately, you have that choice. And you are protected by
the Constitution of the US, or at least the Bill of Rights. What is relevant
to this course is how that expands, or extends,
into these new technologies. So here are the
main problems that arise. The Fourth Amendment-- because of the founding fathers and everything-- is a really old document. There was no such thing as phones or internet back then. What they were talking about, of course, as you see in the actual writing, is the right of the people to be secure in their persons, houses-- of course, back then they had houses and they had papers; they're not that old-- and effects. There was no such thing as
computers, laptops, phones, and things like that, databases,
smartphones, even cars. And so this raises questions
for the Supreme Court of interpreting, and for
everybody to interpret, how that Fourth Amendment can
extend to new technologies. Here's the reality
of things nowadays, much of our personal information
is no longer safe in our homes. It resides in huge databases
outside of our control, like in the Cloud. You have a Dropbox account,
you have a Google account, [COUGHING] you have
files or photos there that are not stored at your address; they're stored on a server. So the question is, how does
that extend to your property? Is that technically
your property? Is that technically
in your house? Is the Fourth Amendment
protecting that? These are big questions
that have come up in the recent years
in the Supreme Court. New technologies also
allowed the government to potentially search our homes
without even entering them. And they could also search
a person from a distance without our knowledge. There's thermal imaging
systems where you can basically X-ray a wall and see if
there's people inside, and you can see
what they're doing. And imagine that you're
looking with thermals and seeing two
people inside a house and you see what
looks like a slap. So imagine that there's two
people, you have two guys, they're just standing there. And this guy slaps that one
and the cops are outside. That's a very bad
version of a car, looks more like a
spaceship but whatever. And then here's a cop and
they got a little camera, and it's a thermal camera,
and you're inside
a house. They see the camera, they see what looks like domestic violence, they break in, they arrest you. Turns out that you were
trying to film a movie and you were acting. You were filming a YouTube video
for your students or something, I don't know. What the heck, you know. Why is the cop even looking
in there in the first place? I have the right to privacy, and the Fourth Amendment says no unreasonable searches and seizures. Is using a thermal
imaging system to look through walls the same
thing
as searching a home, even though they are not
technically entering it, and they're looking
at it from a distance? Another example of that, in
New York, a couple of years ago, I saw that they were making
portable X-ray scanners so that they could scan for
potential terrorists who could have hidden weapons under their coat or something. And so they wanted to scan
people to make sure that they didn't have any weapons. Can you do that? Is that OK? Isn't that technically
search and seizure? Well, the seizure part is only if they found a weapon, I suppose. But isn't that
searching technically? Even though you don't even
know that's happening. It's not-- you're not
even aware that you're being scanned by these people. Or right now with
the coronavirus. Hotels and other places
have scanners at the doors that check temperature. You're not even aware,
potentially-- maybe, there's probably signs but,
if there weren't, you wouldn't be aware
that they're scanning you for a high temperature, a fever. Actually, I read
that the casinos are also trying to implement gun-detection technology for scanning whether you are carrying a weapon or not in a casino. And then they can alert
security to kick you out or at least to
investigate if you're planning something shady. So essentially, are
those things protected by the Fourth Amendment? Are they not part of our home? No such things existed
back then when they created the Bill of Rights. So again, the new technologies make possible these non-invasive-- because they're not technically invasive-- but deeply revealing searches. There's also things
like particle sniffers. This one is kind
of obvious but when you go to the airport
and then they swab you and then they check
for explosives. That you can at least
see them swabbing you but imagine if they
didn't even swab you. They could just have like
a little sniffer thing that could just smell
things and then see if there are explosives on you, and you're not aware of that. Is that OK? Imaging systems,
location trackers also, all of these things, are we
protected by the government? Should there be restrictions
placed on their usage? When should we permit
specific government agents to use them without
a search warrant? Or should they be covered under the Fourth Amendment? And so these are big questions
but some of these Supreme Court cases that cover
this and there's been two primary ones that
have to do more with older technology: phones. Phones have been around longer, so these questions have come up with phones because
of wiretapping. Can you get wiretapped? And does that require a warrant? Or would you be protected under the Fourth Amendment, so that they would need a warrant? Or can the government do
it without asking you? Same thing actually
what we were talking about earlier with the
browser history search. Should that be protected by the Fourth Amendment? The fact that a government agency could potentially get your browser history from the cable company: should that be protected or not? And so the Supreme Court cases
that have dealt with that are-- I'm going to write it down here
because it's a difficult name. Olmstead versus
United States, 1928. That's almost 100 years ago. And already dealing
with wiretapping, would you look at that. In 1928, Olmstead-- I don't know. Here, I can probably
Google some of it. I know the outcome
of the case but let me see if there's a
little background on what happened with him. Olmstead versus United States. You can probably
Google it yourself but I want to see what happened. Background information. "Until 1914, the
American judicial system largely followed the precepts
of the English common law." Case details--
"the petitioners--" "The case concerned
several petitioners, including Roy Olmstead, who
challenged their convictions, arguing the use of evidence of
wiretapped private telephone calls amounted to a violation of
the Fourth and Fifth Amendments. The petitioners were convicted for alleged conspiracy to violate the National Prohibition Act by unlawfully possessing, transporting, and selling alcohol." So basically this guy
got caught bootlegging. And then the way
they were caught was probably that
they got wiretapped so I wouldn't be surprised
if these guys were related to Al Capone era kind of thing. So anyways they
were like, yo, this is a violation of the Fourth. And so the Supreme
Court actually sided with the government
and was like, nah, it's not a viola
tion. They basically were like,
Supreme Court said-- I'll write that
down because this is the actual important outcome
is Supreme Court allowed [WRITING] the use of
wiretaps on telephone lines without a court order. That means that you could basically just be spied upon, and that's OK. If the FBI wants to look into you, they can do that, no problem. No need for a court order or anything. What they basically
said is they looked at the Fourth Amendment-- and they basically
said that
it only applies to physical
intrusions and only to the search or seizure
of material things, not conversations. [WRITING] So only to physical. [WRITING] Not conversations. So I'll read that again,
I'll let that sink in. The Fourth Amendment only
applies to physical intrusion and only to the search or
seizure of material things, not conversations. That means that the
government can wiretap. That Supreme Court
result would extend to basically the government
being able to pretty much spy on any digital information, because it's a conversation too, I suppose. Facebook chats are not
technically material things. So before you
freak out and panic about what you said on
the phone last night, it turns out that wasn't
the end of that story. Because in 1967, there was
a second Supreme Court case. However, I mean, I kind of
feel bad for those people that were from between 1920 to
1967 because, essentially, Big Brother was watching when
they were talking on the phone. So, I don't know about you, but I would feel kind of paranoid, even if I'm not
doing anything just because I'm being listened to. Essentially, it turns out that
in the second court case-- which I'll Google again because
that's actually kind of cool to find out why it came to be-- In 1967-- somehow I have a
feeling this is going to be related to the Cold
War but who knows-- the Supreme Court actually
reversed its position. It said, never
mind, we were wrong, we apologize kind of thing. And it turned out, it ruled
that the Fourth Amendment does apply to conversations. So hey, let this be a silver
lining, if someday you see a Supreme Court case that
you disagree with, you think that the Supreme Court
made a wrong choice, don't think that's
the end of the story. In the future,
typically not going to happen immediately
because then the judge has got
to change, probably. Otherwise-- I mean, I guess
the same judges could change but it's unlikely, I suppose. But typically, what can happen
is when new judges come in and a similar case arrives,
they can change their minds and they can undo what
the other judges did if they think that
it's not right, if they think that they were
probably not understanding. And here's the
real thing: I don't think that the judges had bad intentions back in 1928. I think it was more that the
technology was so new that it was hard to really grasp
what they were doing then and that same thing
is happening today. A lot of the stuff, like the
internet, is pretty new
stuff. We still need that
technology to mature and our understanding
of that technology to mature before we can truly
create laws that are fair and actual interpretations of
things like the Bill of Rights and how they apply to
current technologies. That's just something
that happens with time. It's unfortunate but
outside of the CS community, very few people even
know what Linux is. So I imagine they pass
a law related to Linux. The people, the
judges, would have no-- I mean if it went to
the Supreme Court, the judges would be like,
what the heck is Linux? Is that like a Windows thing? And they'd be like,
yeah but hackers use it. Oh, OK, then it's bad. They wouldn't know. That happens a lot. So technology has to mature
for better laws to come out. And so, I guess, like I
said, the silver lining is that if you ever see the Supreme Court rule on something that is relatively new, and they rule against what you
thought was the right thing, do not feel bad. There's always
the
second time around. It's a good thing,
I actually think that's a good part
of our government, that the judicial system
can undo things if they find that to be the case. It's good because
it allows the-- the fact of the matter is that people change, and governments change, and people's perception of the government changes, and-- yes, the Bill of Rights is great
and the Constitution is great, but at the end of the
day, it should serve to help better people's lives. It should be, like
anything else, a living document that
is up to interpretation by different people in different
times and places in life. Just a little bit
of an aside but I think that it serves
to put a point here that it turns out that, in this case,
again unless they had a warrant. And again, here, what they
said was that the court-- it says that the
Fourth Amendment protects people not places. So that's a big,
big difference, OK? Before, they put it as physical intrusion and material things; they weren't thinking of people. Then they interpreted
the Fourth Amendment as saying it's related to
people not places or materials. And because it's
related to people, then technically speaking,
you talking to someone is an extension
of you and it must be protected along with you
for your Fourth Amendment. It's a great thing that they
took that as an interpretation, because
that has
effects up to today, such as, again, with the
browser search history. So basically it said,
to intrude in a place where a reasonable person
has reasonable expectation of privacy requires
a court order. Now, the word reasonable
is kind of up to debate. Let me write this and
then I'll talk about it so I can focus on
writing my letters nicer. [WRITING] Essentially they're
saying to intrude in a place or a location
where a person has a reasonable
expectation of privacy. The word reasonable means
they kind of expect privacy, somebody would
find it reasonable, but then again what you
might find reasonable, I might find unreasonable. I might be like, hey, a C
grade is a reasonable grade for this class. You're like, yeah,
you pass the class. You might be like,
that is not reasonable, I want to get an
A in this class. So my expectations
and your expectations could be very different. And so what I might
find unreasonable, or you might find unreasonable,
I might find reasonable. I might be
like, this
programming assignment is easy, it's reasonable that you
can do it in one week. And you might be like,
no, that is a lie. This takes me three months
to do, it's unreasonable. So you can see how that word
can, and has probably, created some derivative cases as to
what is reasonable or not. As far as I understand
it, wiretapping laws vary from state to state as
to whether you can record-- not wiretapping,
recording phone calls. For example, the state of Nevada
says that you cannot record
a phone call and use it as
evidence unless both parties agree to it. Other states only require
one party to agree to it. It varies between states. At the end of the
day, the US government is a group of different states
with different governments themselves. It's a union of states. So it varies. I would refer to
your local law to see how the word reasonable
and expectation of privacy is interpreted. Now, because I'm curious, let's
see how that case came out. So let's Google Katz
versus the United
States. [TYPING] Let's see, background. "Charles Katz was
a resident of LA, who had long been involved
in sports betting. By the mid-1960s, he had become
probably the preeminent college basketball handicapper
in America. In February 1965, Katz
on several occasions used a public telephone
booth near his apartment on Sunset Boulevard to
provide his gambling handicaps to bookmakers in
Boston and Miami. Unbeknownst to him, the FBI
had begun an investigation on his gambling activities, and
was recording
his conversations via a covert listening
device attached to the outside of
the phone booth. After recording a number
of his phone calls, FBI agents arrested
and charged him with eight counts of
knowingly transmitting wagering information over
telephones between the US states, which is
a federal crime." Oh, so they got him because
of interstate trade, probably. And then he appealed
his conviction to the Court of Appeals
in 1966 and then he appealed to
the Supreme Court. And then they made a
decision on December 18, 1967, a 7-1 decision. So that's a pretty strong
decision, actually. Yeah, that's pretty cool. You can read more
about the opinions of the court and the case
if you go on Wikipedia. If I remember I'll try to-- Yeah I'll put a link to this
on the YouTube description. I'll put that one and
also put our homie that was bootlegging stuff. What was his name? Roy what? Olmstead? Yeah. [TYPING] I'll leave those two and then
you can read them on your own. It's always
interesting, you know. I like history. Those who do not know history
are doomed to repeat it, right? That was like the marketing
campaign in Call of Duty. One more court case that
came out sort of extended this, starting to
talk about, in this case, thermal imaging. So no surprise that I
was talking about that, probably because I was
thinking about this. It's Kyllo versus the US, 2001. So in this case, I'm not going
to write down too many details, I'll just tell them to you. The Supreme Court ruled that
police cannot
use thermal imaging devices to search your
home from the outside without a search warrant. So, you know what I was
saying with the scanner, they can't do that. They cannot do that anymore. That is a violation of
the Fourth Amendment. The court stated that when the
government uses a device that is not in general public
use to explore details of the home that would
previously have been unknowable without physical
intrusion, the surveillance is considered a search. And that, of course, we
saw in
the Fourth Amendment is a no-no. As we can see here, the
right of the persons to be secured in their person-- "The right of the people to be
secure in their person, houses, paper, and effects against
unreasonable searches." There's that word
unreasonable again, which of course is up to
debate a lot of the times when there's a warrant used. Here, let's look at the why. This must have been
some investigation on something shady
because why would they use thermal imaging, right? And, by the way, this
is a 5-4 decision. This one was a lot
closer of a decision. 2001, most of us were
probably alive then already. Let's see how it happened. Let's see, so this happened
in Florence, Oregon. "The device recorded only heat
being emitted from the home." Oh, I think I've
read about this one. Yeah, yeah, yeah,
yeah this one I know. So what happened was-- let's see-- OK, I'll tell you
the way that I remember it and then I'll read
it and we'll see the difference in what the
internet says and what is not.
The way that I read this is that
in Oregon, in the winter, typically the houses
have snow on the roof. So if you look at like satellite
imaging or helicopter flying over houses, you will
see a bunch of houses. And these houses were covered
with snow in the winter. Like you would see
snow, snow, snow. Imagine this is snow,
snow, snow, snow, and then you have one house
that had no snow. And then it was
like, question mark? Why doesn't this guy have snow? Why is the house-- why is the snow
always melting? What's this guy doing inside
the house that it's melting it? Oh yeah, so I think
the first time I heard about this case was
actually in US government class in high school which
sounds about the-- yeah, yeah, sounds about
the right time too. Yeah, yeah, yeah. That's probably why it came to mind
recently when I mentioned it. Yeah, yeah. So anyways, this here, the
cops were like, this is weird. Why is this guy
warming up the house so much that snow
is always melting? They started scanning
with a
machine to see what was inside, and they saw that it was a
lot of heat being emitted. And so they basically
made an assumption that to grow marijuana
indoors, one needs to provide a large amount
of light for the plants to perform photosynthesis. And so they used
that information to get a search warrant and,
when they executed the search warrant, they actually found
out that this guy was growing over 100 marijuana plants. Basically this guy
was growing weed and it was so hot in the house,
that
it was literally melting the snow and basically attracted
the attention of the cops. That's actually kind of funny. That actually went all the
way to the Supreme Court, and they basically
ruled that was not enough to justify the
search, I believe. The fact that they did
thermal imaging of the house constituted a search. If the cops had only
been like, hey, the house does not look like the others-- the snow is melting. The fact that there's heat
coming out of the house is going to be enough
to get a warrant. If they had convinced a judge
to actually give them a warrant just on
that information, that would have been OK and
that guy would be in jail. He would be because in 2001
I'm pretty sure that weed was probably illegal then. Essentially, because they use
the thermal imaging system, now that's unreasonable
search, which means that any evidence
they found as a cause of it is not admissible
in court probably and they'll probably
let this guy go. That's pretty funny and pretty
cool
at the end of the day. So reading here, let's see if
there's anything else here. So it turns out that
they were using-- "According to the
District Court that presided over the
evidentiary hearing, the device could not
penetrate walls or windows to reveal conversation
of human beings. The device recorded only heat
being emitted from the home. And they showed that there
was an unusual amount of heat radiating from the roof and
side walls of the garage compared with the
rest of the house." So it was
only
looking at the garage. And then they made
the assumption that it was probably weed. So interesting. And "Kyllo first tried to
suppress the evidence obtained from the thermal
imaging search, but then he pleaded
guilty-- entered a conditional guilty plea." Yeah, that's pretty cool. Yeah, I feel proud
that I knew that. I guess I did remember
my US history. So that's like a more
modern usage of technology and what constitutes
a search or not. As you can see, technology--
the founding fathers
had no idea that we would
be able to do that, be able to look into a
house from the outside, unless we used
magic or something. But because we can
use it, these are things that people have to
now interpret and figure out how we use our current
laws and current rights, or things in the Bill of Rights,
and how we extend that to technological devices. And so this is one
example of that. I'm pretty sure that we're going
to get more cases like this. I'm pretty sure we already have
cases like this, I just didn't
find them, but-- at the end of the
day, at least when it comes to the
Fourth Amendment, the important
thing to understand is the plain view doctrine. Because when there's
a search and seizure, like they have a search warrant
and they go into your house, they have to specify
what they're looking for and what they intend to take. So let's say that they break
into the house to search for-- let's say that they're
looking for an illegal weapon in your house. So they break into your house,
SWAT team and
everything, and they look for the gun. They don't find any guns but,
as they're looking around, they found some drugs. Bam! They got you for the drugs. The question is, were
the drugs on a countertop or were they
underneath the floor? And would that be OK? Were those found in the process
of looking for the guns? Now the police
could say, well, we were trying to see if there
was guns underneath the tiles of the house or something. But, at the same time, that's
because they have a warrant. Let's
say that a cop
walks into your house, you let him in, just
because you let him in. And he happens to
look, with his eyes, and see that on the
back table there's a powder, a white powder. That will be in plain view of
the police and, at that point, he can be like, OK, I
have reasonable cause to believe that is drugs. So now I'm actually
going to go and approach that, and potentially take
that as evidence, and now he has more power because
he has it in plain view. That's kind of a clean-cut
example
of that. Another example: you get pulled over. You get pulled
over, you don't have to allow the cops
to search your car. However, if the cop looks
into the backseat, because you have a window
and it's not tinted, and he can see
inside and he sees what looks like drugs in
there, that's in plain view. At that point, he
can then be like, hey there's drugs in there,
I have reasonable cause. I can go in the car and actually
check if those are real drugs or not, and they can arrest you. On the other
hand,
if you had drugs-- which you really shouldn't. But if you had drugs and you had
it in a dashboard compartment, then that's not in plain view. The cop can't just
go into your car and open that, unless
you consent to search, which you don't have to. Unless they have a
warrant, they can't do it. And if they do it,
it's a violation, then you have the
same as with the weed guy, where you can be let off
because it was an unconstitutional search. Essentially, plain view works
like that for
old school cases, but how does that extend to
something like digital stuff? Let's talk about computers
and smart phone files. What will be considered
in plain view? Is it OK if the cops come to
search your house for the gun, and they see a laptop? The fact that the laptop
itself is in plain view, does that mean that they
can go into your computer and scan every single
file in your computer? Would that be interpreted
under the plain view doctrine? And I'll post a link on YouTube
here for a Wikipedia
article with more about the
plain view doctrine, but does that count
as plain view? How do we interpret what is
plain view and what is not? There's no right
answer to that, there's a lot of court cases on that,
but I recommend you and welcome you to look at that. But that's pretty much all I'll
say about that because this is getting pretty long already. Moving on, there's other topics
that we want to talk about. We already talked
about the big ones, which are privacy and
the Fourth Amendment.
Let's talk a little bit about
surveillance technologies and face recognition. So of course security cameras
provide security, they provide a little bit of
safety, I suppose in the sense that the criminal knows
that they're being recorded but they decrease privacy. So a lot of the times,
you have security cameras, you might have them
looking outwards but you might not have
them inside your house because you don't want to
be potentially recorded. And things like that. So that's some of the
surveillance technologies that are out there
and how they apply with expectation of privacy. Moving on into the
business and social sector. We kind of talked
about it already, with marketing and
personalization, we have targeted
ads, data mining. We have informed consent, we
have do not track buttons, we have companies
selling your data, so other companies paying
for consumer information. Social networks, we go
in and post pictures, statuses, and things like that. And then a social network
might
create a new service with unexpected privacy
settings that potentially do things you
don't want and it's more of an opt out versus
opt in kind of thing. My question to you
is, have you ever had a situation where you had
posted information to the web that you later wanted to remove? And why did you remove it? Were there consequences for
posting the information? Have you seen
information about others that they have posted that you
would not reveal about yourself? And if you removed
it, do you really
believe that when you
deleted it it's truly gone? It's not in some
backup server somewhere that is going to come
back to haunt you? That's the kind of
questions that we have to think about
when we have data online because it turns
out that, again when you put something online it's
pretty much there forever. I already talked about
location tracking so I can move on from that. Just in case you're
not aware when I say GPS that means
global positioning system. It's a service with
satellites that
allows you to know
pretty accurately now, with the newest GPS, I think
it's within three feet. I think three feet
or three meters, I can't remember,
of your location at any particular time. And there's pros
and cons to that, we talked about those already. One pro is, for example, you
can use it to navigate stuff. One con is, of
course, you're being tracked to sell ads,
location based ads. You're driving
around and there's an ad for a restaurant near
you, that kind of thing. Other good things could
be, like, if a parent might want to track their children
through GPS in the cell phone or an RFID. That could be good or bad. I don't know, depending if you
ask the kid or ask the parent, I suppose. Talking about
rights of privacy, I do want to spend a little bit
of time talking about the right to be forgotten. [WRITING] So what is the right
to be forgotten? The right to be forgotten
is basically the right to have material removed. Of course that material
related to you. And this has become a big
thing with the European Union, the GDPR, and everything. But first let's talk
about positive rights and negative rights
for a second again. So is this a positive
right or a negative right? What do you guys think? On the one side,
the negative right, it could be said that it's
a liberty because if you're just strictly talking about
a person being a person, they don't have to remember you. They have to actually make
an effort to remember you, so in a way you could kind of
argue that it's a negative
right. You could also argue that
it's a positive right because it takes an effort on
somebody else to forget you. It might actually be
harder to forget somebody. If one of my students
just told me, I want you to forget me
forever, that's kind of hard. I have no way of going
into my brain and deleting you from memory. That's not a power I have. Of course, what it really
applies to nowadays is data and information. So if a company has your
information and data, the right to have
that
information removed so that you're forgotten
from the website for privacy reasons. That's more of what
we're talking about. From there I had a link
on rights of privacy that I wanted to share. So I'll just post a picture
so, the question being is right to privacy a right of its
own or is a derivative right from something else? The argument is that
the right to privacy is actually not a
right of its own. It's actually a derivative
of other things like property rights or rights of
a person, or rights
to not be caused distress,
or to have agreements kept. If you think about
the right to privacy there isn't an
explicit law that says you have a right to privacy. However, you have
property rights. We talked about the Fourth
Amendment and everything. You have property rights. One would argue that
right to privacy is a subset of property rights. You'd also argue that
it's a right of a person. I just want to point out
that there are some arguments that there is no such
thing as a right to privacy
of its own, but rather that it's a
derivative of other rights. Just wanted to point that out. Last thing that I have here
written is government systems. What do I want to
say about that? Public records. Yeah, there is such a
thing as public records. There's records available
to general public of things like bankruptcy, property
records, arrest records, salary records for
government employees. I mean you can go online and
find out my salary technically because I'm a
government employee. I don't think
it's very
accurate, but you can find it. I don't know why it's
not accurate actually, probably because the way
they split it but yeah. Essentially identity
theft can arise when public records
are accessed especially when they're combined together. How do we control access to
sensitive public records? Because, here's the
thing, public records are public for
transparency purposes. My salary is public
to the taxpayer because they're paying for it. I'm a government employee,
I'm a state employee.
And it turns out
that my salary-- yes, some of it is
coming from your tuition, but I think part of it
is coming from the state. A considerable part,
actually, that's why I'm getting furloughed
with the coronavirus, because all the state employees
are getting furloughed. Essentially the
taxpayer has a right to know how much money
I'm getting because they can be like, yo,
he's getting too much or he's getting too little. Hopefully it's
the latter, right? The problem is, that
puts a burden on me that somebody
could steal my
identity and be like, oh yeah. My name is x and x and
I work here and here and this is how much I make. And you can also probably Google
my name and find where I live, and then you can
Google to see if I have any sort of criminal
or arrest records, which I don't, fortunately. But if I did you could
probably find that and you can find so much
information about me that then you're going
to impersonate me and basically that's
what identity theft is. What do we do then? Do we hide
it? If we hide it, then people
are going to be like, well, you're not transparent. You're probably
making $2 billion and you don't want to tell us. That's why you're not
posting your public records. Here is the big argument
of access versus privacy. Everybody's property
records are public, it's very hard to hide that. Which means that,
if you own a house flat out, you're not on
a mortgage or anything, then people can find
out where you live. And there's websites that
dedicate their business model
to getting all this
public information and compiling it
together and then using it to do searches on people. And I'm sure you've
seen them, I'm not going to mention
any of them because I don't want to promote them. I think they're wrong and
unethical, but they do that. Especially for people
that have arrest records. I've known people that have
had an arrest record and their names are destroyed,
because the moment you Google somebody with an arrest
record, these companies make an effort of being
a top search
result. And when they are, if somebody Googles you, and
the first thing you get back is this guy was arrested
for whatever it is. They have a mug shot
of you, because that's public information too,
that's messed up, right? Now, yes, fair enough,
he got arrested or she got arrested, they
shouldn't have done something bad in the first place. But here's the bad
part about this, an arrest does not necessarily
imply a conviction. Somebody can be
arrested and then it turns out that they
are
innocent but, guess what, that arrest record stuff is
still going to go on there. In fact, that's what
happened to this person. They got arrested, turned out
that they came out innocent, but guess what, the
arrest put a mugshot on the internet of them. And they were innocent but are
they going to have a chance to explain that to people that
unethically Google them before a job when they
really shouldn't? No, and so they're not going to
get the job because of that. And here's the worst
part,
a lot of those-- I don't want this to become
a rant, but I'll just say, the worst part is those
websites actually charge you money to remove your
information from there. Your own information. So your information
about you, your mugshot, they charge money to
remove you from it. That is getting on to-- I forgot the name of the crime-- not rake but racket--
racketeering-- I think, it's a racket. A racket essentially. They're basically
extorting people for money to delete information
that is public.
And the reason that
they're doing it is because they know it's
going to happen to you. They know that you don't want
that sensitive information to be the first
result on Google. And they don't care if you were
innocent but you got arrested and it's going to be there. It's wrong, it's unethical,
but it's out there and it's technically legal. That's the worst part,
it's perfectly legal. I'll leave you with that
to make your own judgment of whether that's
right or wrong. [TYPING] There's a section
on
national ID systems, I'm not really going
to mention much about that other than Social
Security numbers are great but they are weak because
they're easily falsified. The alternative would
be to have an ID system, everybody has like
a specific card. Kind of like a driver's
license but a state card, which is kind of like
a passport in a way, I know places like Russia have
an internal passport, that's pretty much how it works. The problem with that is
that it's great for security, you only have
to carry one
card, and it's more secure than, like, a social
security number, which has no photo associated
to it and things like that. But the con of that is that
it has the increased potential for abuse in the
sense of violating your freedom of privacy. That's just one
thing to mention. And finally, let's talk a little
bit about the technology market for privacy. [WRITING] A privacy
market, I suppose. Could be a good name,
a good catchy name for that. So where there's
a demand, there's going to be
a
supply, eventually. That's like the
capitalism and everything. If people have problems
with identity theft, then there's going to be
companies that come out that help you with that. And so there's a
lot of companies that are helping people
with identity theft, I'm sure you've
heard of all of them. Furthermore, encryption
is a big thing nowadays because of
public [INAUDIBLE] and things like that to
protect data, and companies that develop this, governments
that develop this. There's business
tools
for policies in protecting data, securing
data, and things like that. In fact, there was
an encryption-- there was a government ban at
some point on exporting
software in the 90s but then it got
removed in the 2000s. That's kind of weird. Why would you want that? It's encryption,
if you do it right, it doesn't matter if
you have the algorithm, you still can't crack it. That's all I really
wanted to say, that there's a market out there.
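By the way, that idea-- that a well-designed cipher stays secure even when everyone can see the algorithm, because all of the secrecy lives in the key-- is known as Kerckhoffs's principle. Here's a toy Python sketch of it, my own illustration and deliberately simplified (do not use this construction for real security): the algorithm below is completely public, yet without the key you can't recover the message.

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Expand the secret key into `length` pseudorandom bytes (toy construction)."""
    out = b""
    for i in count():
        if len(out) >= length:
            break
        # Hash the key together with a counter to get successive 32-byte blocks.
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XORing with the keystream (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

message = b"meet me at the phone booth"
ciphertext = xor_cipher(b"my secret key", message)

# The whole algorithm is right here in the open, but only the right key
# turns the ciphertext back into the message.
assert xor_cipher(b"my secret key", ciphertext) == message
assert xor_cipher(b"wrong guess", ciphertext) != message
```

The point is that the strength is supposed to come entirely from the key, not from hiding the algorithm-- which is part of why banning the export of the algorithms themselves was such a strange policy.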
And, at the end of the day, you should take
away that
it's your responsibility to make sure your
privacy is maintained. But, at the same
time, even though it's your responsibility,
you sometimes don't have the full
control over that. Like I said, once a
business has your data, you're at the mercy of them. You can do your best effort to
get them to get rid of the data but if they don't get rid of
it, then you're kind of stuck. And that's where the
government could potentially come in and help
or other things. So always be very
careful when
you are giving data away because-- have the expectation
that they're not going to let it go easily. Always be very, very careful. I suppose I should
mention, since I've been talking about it all
day, the European GDPR stuff. So it turns out
that the EU is a little bit more
strict than the US when it comes to
the privacy stuff. They have the data
privacy directive, which prohibits the transfer
of personal information to countries outside
of the EU that don't have a system
for privacy protection.
It's called the
Safe Harbor plan. And there's still abuses,
it puts a requirement on businesses outside of
the EU, which don't really have to follow the EU because
they're not part of the EU. So it's not perfect,
it only works if the entire world uses it. I'll leave it at that, you can
make your own interpretation of that. Other things I have
here in the notes is, for example, there was the
Electronic Communications Privacy Act of 1986,
called the ECPA. That extended the
1968 wiretapping laws
to include electronic
communication, which basically boils down to restricting
access from the government-- restricting government
access to your email. So that's a big, big thing. This happened in
1986 and that's what guarantees that the
government is not going to be looking at my Gmail
address, emails and everything. I'm pretty sure
that because it's called electronic communication,
that extends also to anything like text messaging, messaging
in general like Facebook messaging, or any sort of
electronic communication essentially, Skype, whatever. It's a very, very, very
important thing that happened and a very, very big act. There's also the
Communications Assistance for Law Enforcement
Act, called CALEA, and this was passed in 1994. And that requires--
this was controversial. This one requires the
telecommunication equipment to be designed to ensure that
the government can intercept telephone calls. So there's literally
a law that requires phone companies to have a way
of intercepting
phone calls. However, with a court
order or authorization. So they're still
saying they're going to follow the Fourth Amendment
but think about it here, they are saying, yes,
government is great and secure and keeps you safe. They're going to follow
the Fourth Amendment and not break the law. But if we do have
a search warrant, we want to make sure
that we can actually go in and get into
your phone and actually listen to your calls. The problem with this
is that the same sort of hole that
telecommunication
equipment can have for a government
to listen in, can be opened for less
nice people that could also use the same exploits. There's a case some years ago-- or at least it feels like
it was some years ago, where the US government
told Apple, we want a back-- what do they call
it, a back entry? A back-- back access? Shoot. I forgot the name. Backdoor. Backdoor, yes, we
want a backdoor. So backdoor-- A backdoor in
programming, or hacking, is basically an exploit that
allows you
to bypass security on the device. Typically backdoors are-- there's a difference between
an exploit and a backdoor, in the sense that the backdoor
is kind of already there and it's kind of intentional,
I suppose, by the people who created the device. So what the US government
was saying, hey Apple. Your phone is encrypted,
this is not good. In fact, I believe it was
the Boston bombers or-- there was one-- No, no, no. It was a shooting in
California, and a couple were using Apple phones
and they wanted
access to those Apple phones because-- I mean even though-- I don't know if the guys
died in the shooting or if they were
caught or whatever. I mean, they were either
killed or caught and arrested and I don't remember that. But the point is that the
government was like, Apple, we need access. We have a warrant. We literally have a
warrant from a judge that says we have the full
government support to get into these phones and get
evidence on these people because they're mass
shooters, basically.
Apple said, sorry, we
don't have such a thing. And the government is
like, no, no, no, no, no. You need to put in a backdoor
to every single Apple phone in the United States
so that, if we have a warrant, we are able to do it. And then Apple was-- I think Apple at the time
put a stance of saying, no, we're not going to do this. And it became a big deal. I actually don't remember
what the conclusion of that is, I'll leave it to you
to Google that on your own, but it did bring up an important
thing, because a lot of people saw it and were like,
hey, wait a second. If the US government asked
Apple to put a backdoor and they put it in and
it's supposed to be secret, but what if like
a hacker finds out and then uses that to
steal your information? The hackers are not
going to get a warrant, they're just going to hack
in and get the information. So what do we do? Do we put in the
backdoor and then actually risk non US
government intervention here? Or do we put in a backdoor, hope
that
the only people that use it are the US government
and then go with that? Big ethical question
here, no answer. I will let you-- if we hadn't had a
discussion last time, I would have probably made
that a good discussion topic. But I'll let you just think
about that on your own time. At the end of the
day, the government, they say that there's only one
agency in the government that listens to you. The NSA is the only
agency that listens. It's the only part of
the government that listens, because the
NSA is basically going to be your
intelligence surveillance agency for inside the
US versus the CIA, which is outside the US
as far as I understand it. And so the NSA is-- I had to bring it up in privacy
because it's a big thing. I'm not going to talk
about it because that's too controversial but I'll just
leave you with one key word that you can Google
and find out more. [WRITING] This word. [CLICK]