We live in an age where everything we interact with is becoming “smart,” be that our phones, watches or even washing machines. The Internet of Things – where everything around us is tethered to the Internet – is changing our lives and habits in dazzling and frightening ways. This blog series will explore the interesting questions of privacy, the future of computer science and the challenges that lie ahead for our digital society.
The technology industry is the ultimate jungle. And the Internet of Things is the ultimate watering hole.
The tech industry is an ecosystem where there’s a scarcity of food – or in our case consumers – and way too many predators. Companies higher up in the food chain command a greater portion of the jungle ecosystem, able to prey on consumers and their fellow companies. And among all places in the jungle, the Internet of Things is the best of the hunting grounds – the ideal place to find prey.
[Related: A primer on the Internet of Things]
It’s easy to see who the big kahunas are in this analogy: Google, Facebook, Microsoft, Apple – basically, the everyday brand names. These companies are staples in our society, carrying extensive customer bases and gripping large swaths of the market. With the resulting limited breathing room, smaller companies are forced to carve out their own identities, evolving to match the market and assert their place in the jungle.
The price tag on free
In the tech industry, innovation is the name of the game.
Innovation is at the center of the Internet of Things. It’s all about providing services that haven’t yet been explored to their fullest. If companies are to survive, they need something innovative to pull in customers. But in a world where consumers have an aversion to all things with price tags – just take a look at students’ stinginess – many companies, Facebook among them, are forced to give their services away for free.
How, then, can these companies gather the finances needed to support their operations?
The answer: data.
It’s quite intuitive: If companies cannot charge you for using their services, the costs have to be offloaded somewhere else. And there are no better customers than advertising agencies, other tech companies and the government – all of which have the money to spare and the interest to back it.
As such, you have companies like Google allowing billions of users to use its search engine for free. In return, it sells users’ data to advertising agencies and fellow tech companies. Ever wonder why LinkedIn is constantly pestering you about importing your email contacts? It’s likely because Google is sharing that information with LinkedIn – and I would go so far as to say it’s doing so in order to make a profit.
The same is true for startups. Take Polymail, a startup that markets itself as a pristine, easy-to-use email client, with features such as tracking whether recipients have read sent emails and a “snooze” option that makes an email disappear from the inbox until a set time. Surprisingly, it provides all of this for free – an incredibly compelling reason to download and use the application. Under the hood, however, it collects all sorts of user data, from IP addresses to physical location. Although its privacy policy doesn’t explicitly say so, one needn’t venture far to conclude that Polymail probably sells this data to cover its costs.
[Related: Plugged In: Big Brother is watching]
It’s apparent that companies often cross the line on privacy to gather data on their users. But this arms race for data is driven almost purely by the market forces dictating the industry. To fund the development of innovative products, companies need to monetize whatever they can of the user experience – even if it means jeopardizing people’s privacy.
While this data collection has drastic ramifications for users’ privacy and security, it has also had effects on the field of computer science as a whole.
Data-driven
While data is readily available to companies, what makes that information monetizable is analytics.
In other words, the ones and zeros collected by companies are not what’s profitable; the interpretation of that data is what brings in the cash.
For example, if I collected consumers’ preferences on a social media application – much as Facebook keeps track of which posts you have “liked” – the raw data itself would not be useful to advertising agencies. It only becomes valuable once I run analyses and draw conclusions about the kinds of preferences a consumer may have, be that cute, furry creatures or cars.
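To make that distinction concrete, here is a minimal, purely illustrative Python sketch – the data, field names and preference_profile helper are all made up for the example, not any real platform’s pipeline – showing how raw “like” events only become valuable once they’re condensed into a preference profile:

```python
from collections import Counter

# Hypothetical raw "like" events, each tagged with a content category.
# (Illustrative data and field names only – not any platform's real export format.)
like_events = [
    {"user_id": 42, "post_id": 101, "category": "pets"},
    {"user_id": 42, "post_id": 102, "category": "cars"},
    {"user_id": 42, "post_id": 103, "category": "pets"},
    {"user_id": 42, "post_id": 104, "category": "pets"},
]

def preference_profile(events):
    """Condense raw like events into a ranked interest profile for one user."""
    counts = Counter(event["category"] for event in events)
    total = sum(counts.values())
    # The raw rows above are nearly worthless to an advertiser; this summary
    # ("mostly pets, some cars") is the part that can actually be sold.
    return {category: count / total for category, count in counts.most_common()}

print(preference_profile(like_events))
# -> {'pets': 0.75, 'cars': 0.25}
```

The point isn’t the code itself but the asymmetry it illustrates: the event log is cheap to collect, while the inference drawn from it is what advertisers will pay for.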
This resulting need for data analysis and organization has fallen on the shoulders of computer scientists and statisticians – and thus students. While processing data has always been part of computer science, there is now a greater push for students to be able to efficiently run computations on large volumes of data – big data analysis, in other words. One need only look at the internship applications of successful companies, most of which ask students whether they are fluent in data mining and analysis, to see the growing need for graduates who can process this influx of data.
This surge of data analytics has broader implications, however. It demonstrates a push by companies to understand the individual in terms of information – ones and zeros. As a result, our indelible essences are being converted into data and monetized despite our wishes. The human being is reduced to nothing more than a digital cash cow.
But in some ways, this emphasis on the individual is fascinating. It is intriguing to watch technology try so hard to recreate the human being from an incomplete set of data – heart rate, location, social media habits. Sure, there are obvious privacy concerns and market forces at play, but the fact that the Internet of Things is in some capacity shifting the focus from groups of people to individuals demonstrates how individualized technology has become – and, in turn, the uniqueness of each of us.