I'm re-reading DeMarco & Lister's Peopleware, first published in 1987, to sort out a nagging problem in my own office. Sometimes it's good to review the classics. It turns out, even though technology has changed the world since then, people haven't changed all that much.
I might have to review Christopher Alexander next.
Researchers at Clemson University, working with FiveThirtyEight, identified 3 million tweets from 2,800 Twitter handles belonging to Russian trolls:
“We identified five categories of IRA-associated Twitter handles, each with unique patterns of behaviors: Right Troll, Left Troll, News Feed, Hashtag Gamer, and Fearmonger. With the exception of the Fearmonger category, handles were consistent and did not switch between categories.”
The five types:
- Right Troll: These Trump-supporting trolls voiced right-leaning, populist messages, but “rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans… They routinely denigrated the Democratic Party, e.g. @LeroyLovesUSA, January 20, 2017, ‘#ThanksObama We're FINALLY evicting Obama. Now Donald Trump will bring back jobs for the lazy ass Obamacare recipients,’” the authors wrote.
- Left Troll: These trolls mainly supported Bernie Sanders, derided mainstream Democrats, and focused heavily on racial identity, in addition to sexual and religious identity. The tweets were “clearly trying to divide the Democratic Party and lower voter turnout,” the authors told FiveThirtyEight.
- News Feed: A bit more mysterious, news feed trolls mostly posed as local news aggregators who linked to legitimate news sources. Some, however, “tweeted about global issues, often with a pro-Russia perspective.”
- Hashtag Gamer: Gamer trolls used hashtag games—a popular call-and-response form of tweeting—to drum up interaction from other users. Some tweets were benign, but many “were overtly political, e.g. @LoraGreeen, July 11, 2015, ‘#WasteAMillionIn3Words Donate to #Hillary.’”
- Fearmonger: These trolls, who were least prevalent in the dataset, spread completely fake news stories, for instance “that salmonella-contaminated turkeys were produced by Koch Foods, a U.S. poultry producer, near the 2015 Thanksgiving holiday.”
Will learning that Russian trolls' "mission was to divide Americans along political and sociocultural lines, and to sow discord within the two major political parties" help people call bullshit on trolling tweets and posts? Probably not. But a guy can dream.
Via Schneier, the head of security for the marketing firm that ran McDonald's Monopoly game stole the million-dollar game pieces:
[FBI Special Agent Richard] Dent’s investigation had started in 2000, when a mysterious informant called the FBI and claimed that McDonald’s games had been rigged by an insider known as “Uncle Jerry.” The person revealed that “winners” paid Uncle Jerry for stolen game pieces in various ways. The $1 million winners, for example, passed the first $50,000 installment to Uncle Jerry in cash. Sometimes Uncle Jerry would demand cash up front, requiring winners to mortgage their homes to come up with the money. According to the informant, members of one close-knit family in Jacksonville had claimed three $1 million prizes and a Dodge Viper.
When Dent alerted McDonald’s headquarters in Oak Brook, Illinois, executives were deeply concerned. The company’s top lawyers pledged to help the FBI, and faxed Dent a list of past winners. They explained that their game pieces were produced by a Los Angeles company, Simon Marketing, and printed by Dittler Brothers in Oakwood, Georgia, a firm trusted with printing U.S. mail stamps and lotto scratch-offs. The person in charge of the game pieces was Simon’s director of security, Jerry Jacobson.
Dent thought he had found his man. But after installing a wiretap on Jacobson’s phone, he realized that his tip had led to a super-sized conspiracy. Jacobson was the head of a sprawling network of mobsters, psychics, strip-club owners, convicts, drug traffickers, and even a family of Mormons, who had falsely claimed more than $24 million in cash and prizes.
The longish read is worth the time.
The Nielsen Norman Group has published new research on how users interact with intelligent assistants like Alexa and Google Home. The results are not great:
Usability testing finds that both voice-only and screen-based intelligent assistants work well only for very limited, simple queries that have fairly simple, short answers. Users have difficulty with anything else.
Our user research found that current intelligent assistants fail on all 6 questions (5 technologies plus integration), resulting in an overall usability level that’s close to useless for even slightly complex interactions. For simple interactions, the devices do meet the bare minimum usability requirements. Even though it goes against the basic premise of human-centered design, users have to train themselves to understand when an intelligent assistant will be useful and when it’s better to avoid using it.
Our ideology has always been that computers should adapt to humans, not the other way around. The promise of AI is exactly one of high adaptability, but we didn’t see that when observing actual use. In contrast, observing users struggle with the AI interfaces felt like a return to the dark ages of the 1970s: the need to memorize cryptic commands, oppressive modes, confusing content, inflexible interactions — basically an unpleasant user experience.
Are we being unreasonable? Isn’t it true that AI-based user interfaces have made huge progress in recent years? Yes, current AI products are better than many of the AI research systems of past decades. But the requirements for everyday use by average people are dramatically higher than the requirements for a graduate student demo. The demos we saw at academic conferences 20 years ago were impressive and held great promise for AI-based interactions. The current products are better, and yet don’t fulfill the promise.
We're not up to HAL or Her yet, in other words, but we're making progress.
The whole article is worth a read.
I probably won't have time to read all of these things over lunch:
Share that last one with your non-technical friends. It's pretty clever.
Item the first: Bruce Schneier discusses how Russian censors have tried to shut down Telegram, an encrypted communications app:
Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender's phone and decrypted on the receiver's phone; no part of the network can eavesdrop on the messages.
Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn't a fixed website, it doesn't need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.
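The tactic can be sketched as a client-side endpoint rotator: the app holds a large pool of addresses and keeps cycling through them in a shuffled order, so blocking any one address (or any fixed subset) doesn't take the service down. A toy sketch of the idea; the function and pool here are mine for illustration, not Telegram's actual mechanism:

```python
import random

def make_endpoint_rotator(ip_pool, seed=None):
    """Yield connection endpoints forever, in a freshly shuffled
    order on each pass through the pool. A client would try each
    endpoint in turn until one isn't blocked."""
    rng = random.Random(seed)
    pool = list(ip_pool)
    while True:
        rng.shuffle(pool)
        for ip in pool:
            yield ip

# Usage: try endpoints until a connection succeeds.
rotator = make_endpoint_rotator(["1.1.1.1", "2.2.2.2", "3.3.3.3"])
candidate = next(rotator)  # hand this to the connection logic
```

With tens of thousands of addresses in the pool, a censor playing whack-a-mole against individual IPs can never keep up.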
A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.
Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned—and technically blocked—the practice of “domain fronting,” a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website—in this case Google.com—to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
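Mechanically, domain fronting exploits the fact that a hostname appears in two places in an HTTPS request: the DNS lookup and TLS handshake, which the censor can see, and the Host header inside the encrypted tunnel, which it cannot. Fronting puts a different name in each. A minimal sketch of that split; the hostnames and helper are hypothetical:

```python
def fronted_request_params(front_domain, hidden_host, path="/"):
    """Build parameters for a domain-fronted HTTPS request.

    The URL (and thus DNS and TLS SNI) names the popular front
    domain, which is all a network censor observes. The Host
    header, sent inside the encrypted tunnel, names the real
    backend, which the CDN uses to route the request.
    """
    return {
        "url": f"https://{front_domain}{path}",      # visible to the censor
        "headers": {"Host": hidden_host},            # visible only to the CDN
    }

# Usage: the censor sees traffic to www.google.com only.
params = fronted_request_params("www.google.com",
                                "hidden-backend.example.com")
```

What Google and Amazon "banned" was exactly this mismatch: their edges now refuse to route a request whose Host header doesn't match the fronted domain.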
Meanwhile, back in the U.S., a federal judge has cleared the path for AT&T to purchase Time Warner, which will create one of the largest companies the world has ever seen.
All of this is scary to a lot of people. Which is why charlatans are on the rise once again.
We live in interesting times.
We just got back from the vet. The x-rays show that Parker's leg is almost completely healed, so he's finally cleared to go back to his play group. He has no idea about this right now but tomorrow morning he'll be very, very happy.
Now I'm about to run to my office, so I'm queuing up these articles to read later:
OK. Chugging some tea, and hitting the CTA. More later.
Alexis Madrigal, closer to an X-er than a Millennial, rhapsodizes on how the telephone ring, once imperative, now repulses:
Before ubiquitous caller ID or even *69 (which allowed you to call back the last person who’d called you), if you didn’t get to the phone in time, that was that. You’d have to wait until they called back. And what if the person calling had something really important to tell you or ask you? Missing a phone call was awful. Hurry!
Not picking up the phone would be like someone knocking at your door and you standing behind it not answering. It was, at the very least, rude, and quite possibly sneaky or creepy or something. Besides, as the phone rang, there were always so many questions, so many things to sort out. Who was it? What did they want? Was it for … me?
There are many reasons for the slow erosion of this commons. The most important aspect is structural: There are simply more communication options. Text messaging and its associated multimedia variations are rich and wonderful: words mixed with emoji, Bitmoji, reaction gifs, regular old photos, video, links. Texting is fun, lightly asynchronous, and possible to do with many people simultaneously.
But in the last couple years, there is a more specific reason for eyeing my phone’s ring warily. Perhaps 80 or even 90 percent of the calls coming into my phone are spam of one kind or another. Now, if I hear my phone buzzing from across the room, at first I’m excited if I think it’s a text, but when it keeps going, and I realize it’s a call, I won’t even bother to walk over. My phone only rings one or two times a day, which means that I can go a whole week without a single phone call coming in that I (or Apple’s software) can even identify, let alone want to pick up.
Meanwhile, robocalling continues to surge, with a record 3.4 billion robocalls placed in April—approximately 40% of all calls made that month, by some reckonings.
Welcome to the 21st century, where your 19th-century technologies do more harm than good.
Not all of this is as depressing as yesterday's batch:
I'm sure there will be more later.
Via Bruce Schneier, interesting research into how to use mouse movements to detect lying:
Cognitive psychologists and neuroscientists have long noted a big "tell" in human behavior: Crafting a lie takes more mental work than telling the truth. So one way to spot lies is to check someone's reaction time.
If they're telling a lie, they'll respond fractionally more slowly than if they're telling the truth. Similarly, if you're asked to elaborate on your lie, you have to think for a second to generate new, additional lies. "You're from Texas, eh? What city? What neighborhood in that city?" You can craft those lies on the fly, but it takes a bit more mental effort, resulting in micro hesitations.
In essence, the scientists wanted to see whether they could detect -- in the mouse movements -- the hesitation of someone concocting a lie.
Turns out ... they could. The truth-tellers moved the mouse quickly and precisely to the true answer. The folks who were lying jiggered around the screen for a bit, in a sort of hemming-and-hawing adaptation of Fitts' Law.
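In broad strokes, a detector like this reduces each mouse path to features such as total response time and how far the cursor strayed from the straight line between start and answer. A toy sketch of that feature extraction, not the researchers' actual code:

```python
import math

def trajectory_features(samples):
    """Summarize a mouse path as features a lie detector might use.

    samples: list of (t, x, y) tuples recorded from the start of the
    question to the click on an answer. Returns the total duration and
    the maximum perpendicular deviation from the straight line between
    the first and last points; hesitant, wandering paths score higher
    on both.
    """
    (t0, x0, y0) = samples[0]
    (t1, x1, y1) = samples[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0  # avoid divide-by-zero on a still cursor
    max_dev = max(
        abs(dy * (x - x0) - dx * (y - y0)) / length  # point-to-line distance
        for _, x, y in samples
    )
    return {"duration": t1 - t0, "max_deviation": max_dev}
```

A truth-teller's beeline path yields near-zero deviation; a liar's hemming-and-hawing shows up as a longer duration and a bigger detour.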
That's kind of cool. And kind of scary.