
Signal

jwz
Drew DeVault: I don't trust Signal:

I expect a tool which claims to be secure to actually be secure. I don't view "but that makes it harder for the average person" as an acceptable excuse. If Edward Snowden and Bruce Schneier are going to spout the virtues of the app, I expect it to actually be secure when it matters - when vulnerable people using it to encrypt sensitive communications are targeted by smart and powerful adversaries.

Making promises about security without explaining the tradeoffs you made in order to appeal to the average user is unethical. Tradeoffs are necessary - but self-serving tradeoffs are not, and it's your responsibility to clearly explain the drawbacks and advantages of the tradeoffs you make. If you make broad and inaccurate statements about your communications product being "secure", then when the political prisoners who believed you are being tortured and hanged, it's on you. The stakes are serious. Let me explain why I don't think Signal takes them seriously. [...]

Truly secure systems do not require you to trust the service provider. This is the point of end-to-end encryption. But we have to trust that Moxie is running the server software he says he is. We have to trust that he isn't writing down a list of people we've talked to, when, and how often. We have to trust not only that Moxie is trustworthy, but given that Open Whisper Systems is based in San Francisco we have to trust that he hasn't received a national security letter, too (by the way, Signal doesn't have a warrant canary). Moxie can tell us he doesn't store these things, but he could. Truly secure systems don't require trust. [...]

And here comes the truly despicable bit:

Moxie forbids you from distributing branded builds of the Signal app, and if you rebrand he forbids you from using the official Open Whisper servers. Because his servers don't federate, that means that users of Signal forks cannot talk to Signal users. This is a truly genius move. No fork of Signal to date has ever gained any traction, and never will, because you can't talk to any Signal users with them. In fact, there are no third-party applications which can interact with Signal users in any way. Moxie can write as many blog posts which appeal to wispy ideals and "moving ecosystems" as he wants, but those are all really convenient excuses for an argument which allows him to design systems which serve his own interests.

No doubt these are non-trivial problems to solve. But I have personally been involved in open source projects which have collectively solved similarly difficult problems a thousand times over with a combined budget on the order of tens of thousands of dollars.

What were you going to do with that 50 million dollars again?

It is clear from its design and behavior that Signal's priority is to be a social network first and an encryption tool second. Growth at any cost.

Last year I gave Signal a try and it immediately spammed all of my contacts with my non-public phone number. So I was already aware that Signal is sketchy as fuck.

But abusing Trademark law to circumvent the checks and balances that open source development normally provides is just appalling. They get to pretend that it is open source, get the bullet item on the pitch sheet, get the good press associated with that, while still maintaining absolute control. It's no less a vertically-integrated, untrustworthy data silo than any product from Facebook or Google.


aranth
28 days ago
Seeking good cross-platform alternative.
mkalus
27 days ago
If you can live without a desktop app: Threema.
satadru
25 days ago
Shit. Signal is spamming all of my contacts who install Signal, right?

What ever happened with keybase.io?
New York, NY
freeAgent
25 days ago
I don't know about the "spam" part. Signal does notify you if people (phone numbers) in your contact list have joined and vice versa, but it's not what I'd call spam. I think this person is being a little hyperbolic about that aspect of Signal's behavior, at least. This is how (I believe) Signal handles contact discovery and what it is doing with your contact list access: https://signal.org/blog/contact-discovery/ and https://signal.org/blog/private-contact-discovery/ Or maybe not...I'm not sure as I have not looked at the code, etc. to see what they actually are doing.
satadru
25 days ago
Spam is a little harsh, but you get little notifications in the Android app which are indistinguishable from interpersonal Signal messages. This forces you to open Signal to see if someone new has actually joined Signal or just sent you a message. That counts as social-network-spam in my book.
reconbot
16 days ago
I had no idea they had a chat

Hanging Up on Mobile in the Name of Security


An entrepreneur and virtual currency investor is suing AT&T for $224 million, claiming the wireless provider was negligent when it failed to prevent thieves from hijacking his mobile account and stealing millions of dollars in cryptocurrencies. Increasingly frequent, high-profile attacks like these are prompting some experts to say the surest way to safeguard one’s online accounts may be to disconnect them from the mobile providers entirely.

The claims come in a lawsuit filed this week in Los Angeles on behalf of Michael Terpin, who co-founded the first angel investor group for bitcoin enthusiasts in 2013. Terpin alleges that crooks stole almost $24 million worth of cryptocurrency after fraudulently executing a “SIM swap” on his mobile phone account at AT&T in early 2018.

A SIM card is the tiny, removable chip in a mobile device that allows it to connect to the provider’s network. Customers can legitimately request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

But SIM swaps are frequently abused by scam artists who trick mobile providers into tying a target’s service to a new SIM card and mobile phone that the attackers control. Unauthorized SIM swaps often are perpetrated by fraudsters who have already stolen or phished a target’s password, as many banks and online services rely on text messages to send users a one-time code that needs to be entered in addition to a password for online authentication.

Terpin alleges that on January 7, 2018, someone requested an unauthorized SIM swap on his AT&T account, causing his phone to go dead and sending all incoming texts and phone calls to a device the attackers controlled. Armed with that access, the intruders were able to reset credentials tied to his cryptocurrency accounts and siphon nearly $24 million worth of digital currencies.

According to Terpin, this was the second time in six months someone had hacked his AT&T number. On June 11, 2017, Terpin’s phone went dead. He soon learned his AT&T password had been changed remotely after 11 attempts in AT&T stores had failed. At the time, AT&T suggested Terpin take advantage of the company’s “extra security” feature — a customer-specified six-digit PIN which is required before any account changes can be made.

Terpin claims an investigation by AT&T into the 2018 breach found that an employee at an AT&T store in Norwich, Conn. somehow executed the SIM swap on his account without having to enter his “extra security” PIN, and that AT&T knew or should have known that employees could bypass its customer security measures.

Terpin is suing AT&T for his $24 million worth of cryptocurrencies, plus $200 million in punitive damages. A copy of his complaint is here (PDF).

AT&T declined to comment on specific claims in the lawsuit, saying only in a statement that, “We dispute these allegations and look forward to presenting our case in court.”

AN ‘IDENTITY CRISIS’?

Mobile phone companies are a major weak point in authentication because so many companies have now built their entire procedure for authenticating customers on a process that involves sending a one-time code to the customer via SMS or automated phone call.

In some cases, thieves executing SIM swaps have already phished or otherwise stolen a target’s bank or email password. But many major social media platforms, such as Instagram, allow users to reset their passwords using nothing more than text-based (SMS) authentication, meaning thieves can hijack those accounts just by having control over the target’s mobile phone number.

Allison Nixon is director of security research at Flashpoint, a security company in New York City that has been closely tracking the murky underworld of communities that teach people how to hijack phone numbers assigned to customer accounts at all of the major mobile providers.

Nixon calls the current SIM-jacking craze “a major identity crisis” for cybersecurity on multiple levels.

“Phone numbers were never originally intended as an identity document, they were designed as a way to contact people,” Nixon said. “But because all these other companies are building in security measures, a phone number has become an identity document.”

In essence, mobile phone companies have become “critical infrastructure” for security precisely because so much is riding on who controls a given mobile number. At the same time, so little is needed to undo weak security controls put in place to prevent abuse.

“The infrastructure wasn’t designed to withstand the kind of attacks happening now,” Nixon said. “The protocols need to be changed, and there are probably laws affecting the telecom companies that need to be reviewed in light of how these companies have evolved.”

Unfortunately, with the major mobile providers so closely tied to your security, there is no way you can remove the most vulnerable chunks of this infrastructure — the mobile store employees who can be paid or otherwise bamboozled into helping these attacks succeed.

No way, that is, unless you completely disconnect your mobile phone number from any sort of SMS-based authentication you currently use, and replace it with Internet-based telephone services that do not offer “helpful” customer support — such as Google Voice.

Google Voice lets users choose a phone number that gets tied to their Google account, and any calls or messages to that number will be forwarded to your mobile number. But unlike phone numbers issued by the major mobile providers, Google Voice numbers can’t be stolen unless someone also hacks your Google password — in which case you likely have much bigger problems.

With Google Voice, there is no customer service person who can be conned over the phone into helping out. There is no retail-store employee who will sell access to your SIM information for a paltry $80 payday. In this view of security, customer service becomes a customer disservice.

Mind you, this isn’t my advice. The above statement summarizes the arguments allegedly made by one of the most accomplished SIM swap thieves in the game today. On July 12, 2018, police in California arrested Joel Ortiz, a 20-year-old college student from Boston who’s accused of using SIM swaps to steal more than $5 million in cryptocurrencies from 40 victims.

Ortiz allegedly had help from a number of unnamed accomplices who collectively targeted high-profile and wealthy people in the cryptocurrency space. In one of three brazen attacks at a bitcoin conference this year, Ortiz allegedly used his SIM swapping skills to steal more than $1.5 million from a cryptocurrency entrepreneur, including nearly $1 million the victim had crowdfunded.

A July 2018 posting from the “OG” Instagram account “0”, allegedly an account hijacked by Joel Ortiz (pictured holding an armload of Dom Perignon champagne).

Ortiz reportedly was a core member of OGUsers[dot]com, a forum that’s grown wildly popular among criminals engaging in SIM swaps to steal cryptocurrency and hijack high-value social media accounts. OG is short for “original gangster,” and it refers to a type of “street cred” for possession of social media account names that are relatively short (between one and six characters). On ogusers[dot]com, Ortiz allegedly picked the username “j”. Short usernames are considered more valuable because they confer on the account holder the appearance of an early adopter on most social networks.

Discussions on the Ogusers forum indicate Ortiz allegedly is the current occupant of perhaps the most OG username on Twitter — an account represented by the number zero “0”. The alias displayed on that twitter profile is “j0”. He also apparently controls the Instagram account by the same number, as well as the Instagram account “t”, which lists its alias as “Joel.”

Shown below is a cached snippet from an Ogusers forum posting by “j” (allegedly Ortiz), advising people to remove their mobile phone number from all important multi-factor authentication options, and to replace it with something like Google Voice.

Ogusers SIM swapper “j” advises forum members on how not to become victims of SIM swapping.

WHAT CAN YOU DO?

All four major wireless carriers — AT&T, Sprint, T-Mobile and Verizon — let customers add security against SIM swaps and related schemes by setting a PIN that must be provided over the phone or in person at a store before account changes can be made. But these security features can be bypassed by incompetent or corrupt mobile store employees.

Mobile store employees who can be bought or tricked into conducting SIM swaps are known as “plugs” in the Ogusers community, and without them SIM swapping schemes become much more difficult.

Last week, KrebsOnSecurity broke the news that police in Florida had arrested a 25-year-old man who’s accused of being part of a group of at least nine individuals who routinely conducted fraudulent SIM swaps on high-value targets. Investigators in that case say they have surveillance logs that show the group discussed working directly with mobile store employees to complete the phone number heists.

In May I wrote about a 27-year-old Boston man who had his three-letter Instagram account name stolen after thieves hijacked his number at T-Mobile. Much like Mr. Terpin, the victim in that case had already taken T-Mobile’s advice and placed a PIN on his account that was supposed to prevent the transfer of his mobile number. T-Mobile ultimately acknowledged that the heist had been carried out by a rogue T-Mobile store employee.

So consider establishing a Google Voice account if you don’t already have one. In setting up a new number, Google requires you to provide a number capable of receiving text messages. Once your Google Voice number is linked to your mobile, the device at the mobile number you gave to Google should notify you instantly if anyone calls or messages the Google number (this assumes your phone has a Wi-Fi or mobile connection to the Internet).

After you’ve done that, take stock of every major account you can think of, replacing your mobile phone number with your Google Voice number everywhere it is listed in your profile.

Here’s where it gets tricky. If you’re all-in for taking the anti-SIM-hacking advice allegedly offered by Mr. Ortiz, once you’ve changed all of your multi-factor authentication options from your mobile number to your Google Voice number, you then have to remove that mobile number you supplied to Google from your Google Voice account. After that, you can still manage calls/messages to and from your Google Voice number using the Google Voice mobile app.

And notice what else Ortiz advises in the screen shot above to secure one’s Gmail and other Google accounts: Using a physical security key (where possible) to replace passwords. This post from a few weeks back explains what security keys are, how they can help harden your security posture, and how to use them. If Google’s own internal security processes count for anything, the company recently told this author that none of its 85,000 employees had been successfully phished for their work credentials since January 2017, when Google began requiring all employees to use physical security keys in place of one-time passwords sent to a mobile device.

Standard disclaimer: If the only two-factor authentication offered by a company you use is based on sending a one-time code via SMS or automated phone call, this is still better than relying on a password alone. But one-time codes generated by a mobile phone app such as Authy or Google Authenticator are more secure than SMS-based options because they are not directly vulnerable to SIM-swapping attacks.
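
For context on why app-generated codes resist SIM swapping: Authy, Google Authenticator and similar apps implement time-based one-time passwords (TOTP, RFC 6238), derived from a secret that lives only on your device and the service’s servers, so there is nothing for a phone-number hijacker to intercept. Below is a minimal, illustrative sketch of how such a code is computed (real apps also handle clock drift, QR-code provisioning and rate limiting); the example secret is a placeholder, not a real credential.

```python
# Minimal sketch of how app-based one-time codes (TOTP, RFC 6238) are generated.
# The shared secret lives only on the device and the server, so controlling a
# phone number reveals nothing. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time code for a base32 secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a common documentation example, not a real secret.
    print(totp("JBSWY3DPEHPK3PXP"))
```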

The web site twofactorauth.org breaks down online service providers by the types of secondary authentication offered (SMS, call, app-based one-time codes, security keys). Take a moment soon to review this important resource and harden your security posture wherever possible.


DEA Asks for Help Laundering Money


As Justin Rohrlich reports this week for the Daily Beast, the Drug Enforcement Administration recently expressed a concern that currency it seizes in drug busts could be covered in deadly chemicals, and has asked potential vendors for information about helping it clean up the dangerous bills.

There is good reason to believe that this is ridiculous.

In a Request for Information posted on June 14, the DEA said it was “interested in learning more about available capability in cleaning and decontaminating currency tainted with drugs and other unknown substances.” Some of these substances, it explained, “may be extremely harmful to human health and potentially result in death,” which can also be extremely harmful to human health. “As such,” the DEA continued, “the currency must be decontaminated to ensure safety.” It invited interested vendors to respond by June 26.

There is a lot wrong with this, even beyond the glaring misuse of the phrase “as such” to mean “therefore” and to refer to “substances” in one sentence but “currency” in the other. I mean, that is certainly appalling and something we need to address, it’s just not the biggest problem here.

Here are two bigger ones.

First, according to multiple sources quoted in the article, while one could not describe currency in circulation as “clean,” and drug residue of some kind is not rare, there seems to be little if any evidence that the levels involved could be harmful in any way, much less deadly. I say this only partly because the “hazardous substances” listed by the DEA for potential “decontamination” include marijuana/THC. (I’m not a doctor, but I haven’t exactly seen any headlines about emergency rooms being choked with cases of marijuana poisoning, and I live in San Francisco.) But what did the actual experts quoted in the article say? Here’s a summary:

  • Former FBI special agent for 22 years: this is all news to me.
  • Forensic toxicologist: “absurd at best,” also “ludicrous.”
  • Med-school professor: “quite odd, given the lack of scientific support.”
  • Former detective: no … but maybe for fentanyl?
  • Forensic toxicologist again: no, not fentanyl either, unless maybe you eat the bills.

The question therefore seems to be: Are DEA agents or administrators eating any of the currency they seize?

And this brings us to the second problem: if they are eating it, how would we know? Because the most interesting thing in the RFI is the DEA’s statement that, because the seized drug money is so dangerous, they will not be able to count it before turning it over to the vendor for cleaning:

Contaminated Currency Packaging Requirements and Delivery. The vendor shall indicate to DEA how contaminated currency should be packaged. DEA will not count the contaminated currency (due to inherent safety issues) prior to packaging the contaminated currency, but will have a general indication of the amount that has been packaged for the vendor. The vendor shall also indicate whether they provide pick up services for DEA, if DEA should deliver the contaminated currency, or both. It is preferred that DEA have a service where the contaminated currency can be double-bagged and provided directly to the vendor….

Emphasis added.

***

“Hi, guys, Steve over at DEA again. Hey, so we got another truckload or so of contaminated currency here that we need to ship over for you to laun— to decontaminate.”

“Whoops! You almost said it, Steve!”

“No, I said ‘decontaminate.’ Like in the proposal. Anyway, we’ve got, like, a truckload of hundreds here. What do you think?”

“Will Friday work?”

“Yes. Oh, and don’t forget—we need to get a truckload back, too.”

“Oh, absolutely. You will get a truckload back.” <is making air quotes with fingers>

“Okay … You’re not making air quotes, are you?”

“No.”

“Okay. Because we talked about that.”

“Absolutely. Oh, and Steve?”

“What?”

“Don’t forget to double-bag the cash. You know, so some of the bags don’t break and spill the money out all over the road, never to be seen again.”

“Very funny. You guys are a real hoot.”

“Hey, it’s a good joke.” <is making air quotes again> “Okay, we’ll try to make some room in the hundreds bin.”

“Okay, thanks.”

***

The phrase you’re looking for, I think, is “what could go wrong?” We have a federal agency pursuing a “war on drugs” that is basically pointless to begin with; an agency that (like many others) has a record of seizing assets before any conviction has taken place and without any discernible connection to law enforcement (see “Report: Many DEA Cash Seizures Have ‘No Discernible Connection’ to Law Enforcement” (Apr. 6, 2017)); and here it is saying it’s going to ship money back and forth for “decontamination,” on a questionable basis, without even counting it.

What could go wrong?

See also “DEA Agent: If You Legalize Pot, Rabbits Will Get High” (May 4, 2015) (discussing another really stupid argument a DEA agent made once).

aranth
89 days ago
🤔
acdha
87 days ago
The better question is which administration buddy this contract will be steered to. The Trump Org is too obvious but has anyone checked Erik Prince’s business filings recently?
Washington, DC

When I asked about Rage 2’s worst character, I got an unexpected response


Collector’s Edition celebrates Rage’s most regrettable tendencies

If I had to name a favorite game of E3 2018 — I’m fickle and bad with favorites — I’d probably say Rage 2. I wrote yesterday that it plays like a mixtape of Bethesda’s portfolio, grafting some of the best bits from Doom, Quake, Wolfenstein and Elder Scrolls onto an open world first-person shooter. Unfortunately, Rage 2 retains the one thing I despised about its predecessor, something I worried would prevent me from really enjoying the sequel.

In 2018, the only thing I remember with any clarity about the original Rage is its tone-deaf depiction of heroes and villains. The good guys were blessed with impossibly perfect skin and preternatural good looks. The villainous foot soldiers were mutants, many with facial wounds that looked an awful lot like my own birth defect: a full cleft lip and palate.

Cleft lips and palates (among other birth defects) have a history of representing villainy, one I’ve had to navigate my entire life. But I hadn’t appreciated the anxiety it caused me until I spent a couple dozen hours shooting ghouls who looked as if they’d been traced off my baby photos — pictures of me before I had the dozen-plus surgeries that pieced my mouth and nose together into what’s culturally established to be a “normal” look.

I’d heard rumors about Rage 2 a couple months ago, that it was being made in collaboration with one of my favorite developers, Avalanche Studios. And I was disappointed, though not surprised, when the trailer revealed that the project, while being something largely new, would retain the same imagery with regard to its mutants and heroes. I was downright crushed when Bethesda revealed the Collector’s Edition statue: a bust of Ruckus the Crusher, a mutated goon with an absent upper lip and deformed nose.

As a journalist, you don’t want to make yourself part of the story. But with a little extra time left in my interview with id Software studio director Tim Willits, I asked why the cleft lip and palate imagery made the cut from Rage to Rage 2. To his credit, he didn’t spin his response. Here’s the transcript.


Chris Plante: I have one other thing. I enjoyed Rage 1, but one thing ended up turning me off to it. I was born with a cleft lip and cleft palate, and one of the frustrating things about that game is that many of the enemies have that imagery — and there’s still a little of that in Rage 2. And I’m curious —

Tim Willits: So you feel that it’s a little insensitive?

Plante: Yeah. It makes me a little uncomfortable when it’s always the bad guys that have the upper lip and nose removed, effectively.

Willits: You know, I never really thought of that. I mean, you know, we try to make — you know, Kenneth Scott was our art director on Rage 1, and yeah, I mean I kind of feel bad now. Sometimes it’s hard when you — you don’t live in that world, so you’re like, ‘Oh, these guys …’ So I apologize. And you know, yeah, I’ll talk to the guys.

Plante: Sure. Are mutations normal for the heroes, too, in this version of the game?

Willits: It’s mostly the bad guys. But we do have some — the heroes in Rage 2 are not as pretty as the heroes in Rage 1. Someone did, like, “the girls of Rage” posters and stuff, so we are trying to be a little more balanced. And the Avalanche guys have been very good about being a little more sensitive. So I do think we have a better balance.


Is it a disappointment to hear that some of Rage 2’s villains will be modeled to share my birth defect? Yes, absolutely. Is it a relief to hear someone simply say sorry? More than I could have imagined, to be frank.

I can’t remember a time somebody did this in an interview: just recognized the error and apologized. It made me emotional, tapping into some psychological payload I won’t detonate in this piece. But it also felt like I suddenly could be excited about this thing I liked, some of its baggage left on the side of the road.

I recognize I have the rare opportunity to actually speak to creators in person, that there isn’t a better means for other people outside my position to have this experience. And I recognize that people of other backgrounds have for decades had to play games that treat them as targets — and that they still do. But for a moment, I felt a surge of optimism. If developers can be open, if they can make efforts to find other voices rather than wait for those voices to come to them, then everyone could feel welcome to play the hero, rather than be forced to spot themselves as the villain.

After all, this is a game set in an apocalyptic wasteland. I don’t expect the villains to be pristine beauty models. I know they’ll be grotesque, deformed and mutated. I just hope that in the future the heroes can look like me, too. Maybe that can be a new feature in Rage 3.


Today in Uber Autonomous Murderbot News

jwz
The Uber executives who put this software on the public roadways need to be in jail. They disabled safety features because they made testing harder. They disabled safety features because they made the ride rougher.

NTSB: Uber's sensors worked; its software utterly failed in fatal crash:

The National Transportation Safety Board has released its preliminary report on the fatal March crash of an Uber self-driving car in Tempe, Arizona. It paints a damning picture of Uber's self-driving technology.

The report confirms that the sensors on the vehicle worked as expected, spotting pedestrian Elaine Herzberg about six seconds prior to impact, which should have given it enough time to stop given the car's 43mph speed.

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

Things got worse from there.

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

Deadly Accident Likely Caused By Software Set to Ignore Objects On Road:

The car's sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber's software decided it didn't need to react right away. That's a result of how the software was tuned. Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn't react fast enough, one of these people said.
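
To make that tradeoff concrete, here is a purely illustrative sketch, not Uber’s actual code, of how an aggressive false-positive threshold combined with a disabled emergency-braking path can turn a detected pedestrian into a non-event. Every name, threshold and value below is invented for illustration.

```python
# Purely illustrative sketch: an overly aggressive false-positive filter plus a
# disabled emergency-braking path suppresses any reaction to a real obstacle.
# Class names, thresholds, and logic are invented; this is not Uber's software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str               # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float        # classifier confidence, 0.0 - 1.0
    seconds_to_impact: float

CONFIDENCE_THRESHOLD = 0.9        # raised to ignore "false positives" like plastic bags
EMERGENCY_BRAKING_ENABLED = False # disabled under computer control, per the NTSB report

def plan_response(det: Detection) -> str:
    if det.confidence < CONFIDENCE_THRESHOLD:
        return "ignore"                      # treated as a false positive
    if det.seconds_to_impact < 1.5:
        if EMERGENCY_BRAKING_ENABLED:
            return "emergency_brake"
        return "rely_on_human_operator"      # and no alert is raised to the operator
    return "gradual_brake"

# An obstacle whose classification keeps flip-flopping never accumulates enough
# confidence to cross the threshold until it is far too late.
print(plan_response(Detection("unknown", 0.6, 4.0)))   # -> ignore
print(plan_response(Detection("bicycle", 0.95, 1.3)))  # -> rely_on_human_operator
```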


satadru
114 days ago
And this is why Uber shut down its autonomous vehicle project in AZ.
New York, NY

Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site


LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.

On May 15, ZDnet.com ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.

Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.

Several hours before the Motherboard story went live, KrebsOnSecurity heard from Robert Xiao, a security researcher at Carnegie Mellon University who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.

LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.

Once that consent is obtained, LocationSmart texts the subscriber their approximate longitude and latitude, plotting the coordinates on a Google Street View map. [It also potentially collects and stores a great deal of technical data about your mobile device. For example, according to their privacy policy that information “may include, but is not limited to, device latitude/longitude, accuracy, heading, speed, and altitude, cell tower, Wi-Fi access point, or IP address information”].

But according to Xiao, a PhD candidate at CMU’s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most people’s cell phone without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving. By pinging the friend’s mobile network multiple times over several minutes, he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.
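
For a sense of how little post-processing that takes: given two consecutive latitude/longitude fixes, standard haversine math yields the distance and rough heading between them. The sketch below is illustrative only, with made-up coordinates; it is not the tool Xiao used.

```python
# Small sketch of how repeated location fixes become a track: compute the
# distance and initial bearing between two consecutive lat/lon readings.
# Standard haversine math; the coordinates below are made up for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from the first fix to the second, in degrees from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Two made-up fixes a few minutes apart:
fix_a, fix_b = (40.7128, -74.0060), (40.7260, -73.9970)
print(round(haversine_m(*fix_a, *fix_b)), "meters moved,",
      round(bearing_deg(*fix_a, *fix_b)), "degrees heading")
```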

“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.

Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Xiao was able to determine within a few seconds of querying the public LocationSmart service the near-exact location of the mobile phone belonging to all five of my sources.

LocationSmart’s demo page.

One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his current location. The remaining three sources said the location returned for their phones was off by roughly one-fifth to one-third of a mile at the time.

Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.

“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”

LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”

LocationSmart’s home page lists many partners.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017. This link from The Internet Archive suggests the service may have existed under a different company name — loc-aid.com — since mid-2011, but it’s unclear if that service used the same code. Loc-aid.com is one of four other sites hosted on the same server as locationsmart.com, according to Domaintools.com.

LocationSmart’s privacy policy says the company has security measures in place…”to protect our site from the loss or misuse of information that we have collected. Our servers are protected by firewalls and are physically located in secure data facilities to further increase security. While no computer is 100% safe from outside attacks, we believe that the steps we have taken to protect your personal information drastically reduce the likelihood of security problems to a level appropriate to the type of information involved.”

But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors.

Although LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.
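
The gap, in other words, is that the consent check lived in the web form rather than in the API behind it. Below is a hypothetical sketch of the kind of server-side check such an API needs, authenticating the caller and verifying a signed, unexpired consent token before returning anything; the function names, parameters and token format are invented and are not LocationSmart’s actual API.

```python
# Hypothetical sketch of the server-side checks an API like this needs:
# authenticate the caller and verify a signed, unexpired consent token before
# returning any location data. Names and token format are invented for
# illustration; this is not LocationSmart's actual API.
import hashlib
import hmac
import time

SERVER_SECRET = b"replace-with-a-real-secret"   # placeholder
VALID_API_KEYS = {"demo-customer-key"}          # placeholder customer keys

def issue_consent_token(phone: str, ttl: int = 600) -> str:
    """Issued only after the subscriber replies 'YES' to the consent SMS."""
    expires = int(time.time()) + ttl
    payload = f"{phone}:{expires}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def lookup_allowed(api_key: str, phone: str, token: str) -> bool:
    """Refuse the lookup unless the caller is authenticated AND has valid consent."""
    if api_key not in VALID_API_KEYS:
        return False                             # anonymous queries rejected
    try:
        tok_phone, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{tok_phone}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and tok_phone == phone
            and int(expires) > time.time())
```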

API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.

In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”

“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”

Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”

Securus did not respond to requests for comment.

THE CARRIERS RESPOND

It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).

None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.

AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.

“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.

T-Mobile referred me to their privacy policy, which says T-Mobile follows the “best practices” document (PDF) for subscriber location data as laid out by the CTIA, the international association for the wireless telecommunications industry.

A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus.

“We are continuing to investigate this matter,” a T-Mobile spokesperson wrote via email. T-Mobile has not yet responded to requests specifically about LocationSmart.

Verizon also referred me to their privacy policy.

Sprint officials shared the following statement:

“Protecting our customers’ privacy and security is a top priority, and we are transparent about our Privacy Policy. To be clear, we do not share or sell consumers’ sensitive information to third parties. We share personally identifiable geo-location information only with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?

Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.

But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third-parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.

“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”

Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.

“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”

“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”

For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.

“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.

aranth
123 days ago
I'm sure the lowest bidding contractor that administers government backdoors would never be this lax on security.