Welcome to “Tech Blabby,” a weekly column that filters tech and gaming news through the cultural lens of Flavorwire. Last week, we looked at Nintendo’s slow but growing embrace of freedom in its video games, and how freedom is essential to a certain type of successful game. This week, it’s the complete opposite, as we take a look at Apple’s new patent for technology that can block iPhones from recording at concerts, and how that plays into the ways in which our interactions with the world have been slyly undermined by the tech industry as a whole.
In March, I went to a Beach House show at the Knockdown Center in Maspeth, Queens. It’s an industrial building, hollowed out and hallowed by the art elite; it feels like a certain Old New York. The concert was appropriate for this environment; it was an arthouse take on the band’s live show, where all of the attendees were asked to sit on bare concrete and keep talking and phone use to a minimum. The space was pitch black but for projections of art directed by the band. It was one of the most serene spaces I’ve ever inhabited in New York City… until we all decided it was more important to pull out our phones and document the show than to actually experience it.
That’s how the story goes. Like an unwanted opening act, frustration with fellow concertgoers is an expected feature at all live shows these days, especially in New York. But, so long as violence isn’t involved, people are allowed to do whatever they want with their phones. Or, at least, they are until Apple delivers on the patent it’s just been awarded by the Patent and Trademark Office, which would provide performers and venues with a device that kills camera functions — both photos and video — on all iPhones.
Specifically, the patent is for a product that would “generate infrared signals with encoded data” that disables all recording functions on users’ phones. (So, a ray gun shooting at you from the front of a stage, then.) This means no brightly lit screens blocking your view, no fight for the best Instagram post from the show, and, maybe, no digital memories. To a lot of us curmudgeons, this is a good thing. But it’s also just one of the many clear steps that tech developers are taking in order to control the way we experience our world.
Companies are doing this in subtler ways than with ray guns, too: Facebook just unveiled plans to alter, once again, the algorithm that determines the content in users’ news feeds, so that posts from “friends and family” will be prioritized over those from publishers. The company promoted this change as “Building a Better News Feed,” but what it’s really doing is reclaiming the Facebook platform from the publishers (follow us on Facebook!) that have come to rely on it as a way of serving content to readerships. (Twitter and Instagram recently made similar changes.) For users, this change probably seems fine, as, if you can believe it, Facebook was initially sold as a way to keep in touch with your friends and family, and many people still use it that way.
But those of us who work in media know what’s really happening. Facebook is looking to serve its own content to users via its Trending sidebar so that users will not be tempted to click through to a site that lives on a non-Facebook domain. Facebook, if it gets its way, will have us all living within its ad-fed browser, a blue-and-white cousin to your gramma’s bright yellow AOL world, only now with lossless video chat and a list of suggested friends that looks eerily similar to your neighborhood’s Scruff grid. In the end, Facebook will be synonymous with the internet.
And for many people, this might be fine. Check email, log onto Facebook, get Facebook’s native news, log off computer: There’s nothing superficially wrong with that, until you stop to think about who is curating this news. (Generally, educated twenty-somethings in New York City.) Regardless of the universality of the Facebook experience, there’s no way that one service can appeal to every worldview or every life — especially if the news delivered by Facebook remains so, hm, minimalistic.
Representation in the newsroom has long been a problem, with legacy publications unable or unwilling to find writers of diverse backgrounds to report on stories that might be undermined by white reporters who treat stories of minorities as curiosities rather than realities. But this is a problem in the tech workforce, too. Think of the people who are programming Google Maps, or the algorithms used to give preference to certain restaurants on Yelp, or the users who have the money to download enough apps to send them trending. Many of our experiences in life — shopping, eating, gaming — are guided by the experiences and preferences of white people with money. (The founders of Instagram, Google, Tumblr, Pinterest, Yelp, Uber, Lyft, Twitter, Imgur, Reddit, and, of course, Facebook, are almost all white dudes, minus a few co-founders.) The New York Times even recently ran a piece titled “Artificial Intelligence’s White Guy Problem” that thoroughly examines the way our technology and its suggestive behaviors have been modeled by, well, white dudes:
Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.
Myopic programming leads to things like black people being tagged as gorillas in images and Asian people’s eyes being detected as closed in photos (as if “closed eyes” are a thing worthy of technological detection). And so, not only were these services — by Google and Nikon, respectively — shoved onto users as near-requirements bundled with Google phones and Nikon cameras, but they also work to remind users of the ways in which they might be perceived by a portion of the world: specifically, the white, affluent portion that largely had a hand in producing the erroneous technology.
Of course, the people in charge of our technology have fed updates and new products to us since personal technologies became the norm. Before Teri Goldstein was suing Microsoft over a forced upgrade to Windows 10, Microsoft was forcing upgrades in more covert ways by shutting down support for older versions of Windows. (The company even has a “lifecycle fact sheet.”) Gaming consoles become relics in less than a decade, forcing gamers to drop hundreds of dollars every few years just to keep up with their hobby. This goes beyond upgrading to a new TV or a new phone: tech, more than most other industries, has always relied on forcing users to abandon old gadgets in favor of new ones.
Now that technology has worked its way into every aspect of our lives, though, these changes are often incremental rather than sweeping. Our phones might not be obsolete, but their operating systems can be changed to alter the way we use them (or become too resource-heavy to run, forcing an upgrade); a tweak in an algorithm can rearrange what we see in a Google search; idiotic autocorrects can color the way we see words and people; and now, maybe, the very company that makes our phone could also be telling us when and how we can use it. And even when tech’s creators aren’t trying to dictate how we use a device or service, their inherent biases are slyly guiding us through life. It’s bad enough that we’ve come to rely on computers to advise us on where to eat, plan our travel routes, or tell us who is in our pictures, but what’s worse is that really, beneath the sing-song voice of Siri, it’s mostly a bunch of white dudes who are whispering in our ears every second of every day, telling us how best to live our lives. Which, when you think about it, isn’t so different from the world removed from technology.