Techdirt
Easily digestible tech news...
Updated: 15 weeks 1 day ago

While Facebook Gets All The Hate, Verizon Continues To Show It's No Better, And Potentially Much Worse For Privacy

Wed, 2018-05-02 11:54

Facebook certainly deserves ample criticism for its lax privacy standards and its decision to threaten news outlets that exposed them. That said, we've noted a few times now that the uneven press fixation on Facebook obscures the fact that numerous industries routinely engage in much worse behavior. That's particularly true of broadband providers (and especially wireless carriers), who routinely treat consumer privacy as a distant afterthought, with only a fraction of the total volume of media hyperventilation we saw during the Facebook kerfuffle.

Facebook's casual treatment of your data isn't some errant tech industry exception, it's the norm, making #quitFacebook an arguably pointless gesture if you still own a stock mobile phone. In the telecom industry, disdain for consumer privacy is a cornerstone of the entire business model. Companies like AT&T and Verizon aren't just bone-grafted to our government's domestic surveillance apparatus; they collect and sell everything from browsing history to location data to absolutely anyone and everyone -- with little to no real oversight, and opt-out tools that may or may not actually work.

Verizon has been particularly busy on the anti-privacy front. You'll recall that the company was fined by the FCC for modifying wireless users' data packets to track them around the internet without telling them. The company had been engaging in this behavior for two years before security researchers even discovered it, and it took another six months of media criticism for Verizon to offer a simple opt-out. Despite the wrist slap, a more powerful variant of this technology is still very much in play at Oath (AOL and Yahoo), Verizon's effort to compete with Google and Facebook in the media advertising wars.

Not long after that, Verizon played a starring role in gutting modest FCC privacy rules protecting consumers (rules spurred in part by Verizon's tracking tech). Those rules, which Verizon lobbyists dismantled last year, simply required that ISPs be transparent about what data they're collecting and who they're selling it to. When California tried to mirror the FCC's discarded privacy rules, Verizon, Facebook and Comcast lied to lawmakers, falsely claiming that modest privacy protections would harm children, increase internet popups, and embolden extremism. None of it was true.

More recently, Verizon has been facing numerous lawsuits over Yahoo hacks that exposed the data of roughly three billion consumers. And while those hacks occurred before Verizon's ownership (Verizon wasn't informed of them during negotiations, netting it a $350 million discount on the purchase), the company has since been actively trying to prevent customers from suing Oath (Yahoo) or Verizon over future breaches by using fine print to mandate binding arbitration:

"The new Oath terms of service 'contain a binding arbitration agreement and class action and jury trial waiver clauses..., which are applicable to all US users,' the terms say.

Congress has considered legislation to ban many mandatory arbitration clauses, but it hasn't followed through yet and the practice remains legal.

The AOL terms already contained a binding arbitration clause and class-action waiver before Verizon bought that company. But the Yahoo terms didn't previously contain such clauses."

Thanks to AT&T's Supreme Court victory in 2011, using contract fine print to erode consumer legal rights is now something we view as the norm. And while everybody can agree that the class action system has numerous problems, binding arbitration is a terrible solution. Under binding arbitration, the arbitrator rules for the company that hired it the vast majority of the time, leaving consumers shit out of luck. While class actions often net the lawyers little more than a nice new boat, they at least occasionally result in substantive change. Arbitration, in turn, is often more like consumer theater than justice.

The reality is that informed and empowered consumers are more likely to opt out of efforts to monetize their online behavior. And however breathlessly companies like Verizon and Facebook pretend to be dedicated to consumer privacy or policy solutions, they're going to fight tooth and nail against any policies -- even reasonable ones -- that could potentially hamstring that revenue. But however bad Facebook is and has been on privacy, Verizon routinely offers a master class when it comes to undermining efforts at anything even vaguely resembling a solution.


Facebook Ranking News Sources By Trust Is A Bad Idea... But No One At Facebook Will Read Our Untrustworthy Analysis

Wed, 2018-05-02 10:40

At some point I need to write a bigger piece on these kinds of things, though I've mentioned it here and there over the past couple of years. For all the complaints about how "bad stuff" is appearing on the big platforms (mainly: Facebook, YouTube, and Twitter), it's depressing how many people think the answer is "well, those platforms should stop the bad stuff." As we've discussed, this is problematic on multiple levels. First, handing over the "content policing" function to these platforms is, well, probably not such a good idea. Historically they've been really bad at it, and there's little reason to think they're going to get any better no matter how much money they throw at artificial intelligence or how many people they hire to moderate content. Second, it requires some sort of objective reality for what's "bad stuff." And that's impossible. One person's bad stuff is another person's good stuff. And almost any decision is going to get criticized by someone or other. It's why suddenly a bunch of foolish people are falsely claiming that these platforms are required by law to be "neutral." (They're not.)

But, as more and more pressure is put on these platforms, eventually they feel they have little choice but to do something... and inevitably, they try to step up their content policing. The latest, as you may have heard, is that Facebook has started to rank news organizations by trust.

Facebook CEO Mark Zuckerberg said Tuesday that the company has already begun to implement a system that ranks news organizations based on trustworthiness, and promotes or suppresses its content based on that metric.

Zuckerberg said the company has gathered data on how consumers perceive news brands by asking them to identify whether they have heard of various publications and if they trust them.

“We put [that data] into the system, and it is acting as a boost or a suppression, and we’re going to dial up the intensity of that over time," he said. "We feel like we have a responsibility to further [break] down polarization and find common ground.”

But, as with the lack of an objective definition of "bad," you've got the same problem with "trust." For example, I sure don't trust "the system" that Zuckerberg mentions above to do a particularly good job of determining which news sources are trustworthy. And, again, trust is such a subjective concept that lots of people inherently trust certain sources over others -- even when those sources have long histories of being full of crap. And given how much "trust" is actually driven by confirmation bias, it's difficult to see how this solution from Facebook will do any good. Suppose, for example (totally hypothetically), that Facebook determines that Infowars is untrustworthy. Many people may agree that a site famous for spreading conspiracy theories and pushing sketchy "supplements" you supposedly need because of conspiracy theory x, y or z is not particularly trustworthy. But how are those who do like Infowars likely to react to this kind of thing? They're not suddenly going to decide the NY Times and the Wall Street Journal are more trustworthy. They're going to see it as proof of a Facebook conspiracy to suppress the truth.

Confirmation bias is a hell of a drug, and Facebook trying to push people in one direction is not going to go over well.

To reveal all of this, Zuckerberg apparently invited a bunch of news organizations to talk about it:

Zuckerberg met with a group of news media executives at the Rosewood Sand Hill hotel in Menlo Park after delivering his keynote speech at Facebook’s annual F8 developer conference Tuesday.

The meeting included representatives from BuzzFeed News, the Information, Quartz, the New York Times, CNN, the Wall Street Journal, NBC, Recode, Univision, Barron’s, the Daily Beast, the Economist, HuffPost, Insider, the Atlantic, the New York Post, and others.

We weren't invited. Does that mean Facebook doesn't view us as trustworthy? I guess so. So it seems unlikely that he'll much care about what we have to say, but we'll say it anyway (though you probably won't be able to read this on Facebook):

Facebook: You're Doing It Wrong.

Facebook should never be the arbiter of truth, no matter how much people push it to be. Instead, it can and should be providing tools for its users to have more control. Let them create better filters. Let them apply their own "trust" metrics, or share trust metrics that others create. Or, as we've suggested on the privacy front, open up the system to let third parties come in and offer up their own trust rankings. Will that reinforce some echo chambers and filter bubbles? Perhaps. But that's not Facebook's fault -- it's part of the nature of human beings and confirmation bias.
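The "trust overlay" idea described above is easy to sketch. Here's a purely hypothetical illustration (all names, scores, and functions are invented for this example, not any real Facebook API): the platform does nothing but rank the feed with whatever trust function the user, or a third party the user chooses, plugs in.

```python
# Hypothetical sketch of a pluggable "trust overlay": the platform ranks
# a feed using whatever trust function the user (or a third party) supplies,
# rather than one centralized trust score. All names and scores are invented.
from typing import Callable, List, Tuple

TrustFn = Callable[[str], float]  # maps a source name to a trust score in [0, 1]

def rank_feed(stories: List[Tuple[str, str]], trust: TrustFn) -> List[Tuple[str, str]]:
    """stories: (source, headline) pairs, reordered by the chosen overlay."""
    return sorted(stories, key=lambda story: trust(story[0]), reverse=True)

# Two competing overlays a user might subscribe to:
overlay_a = {"WSJ": 0.9, "Infowars": 0.1}   # e.g. a fact-checker consortium's list
overlay_b = {"WSJ": 0.4, "Infowars": 0.8}   # e.g. a partisan community's list

stories = [("Infowars", "headline one"), ("WSJ", "headline two")]
print(rank_feed(stories, lambda src: overlay_a.get(src, 0.5)))
```

Same stories, different overlay, different feed: the platform stays neutral plumbing while the trust judgment moves out to the edges, where users and competing third parties make it.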

Or, hey, Facebook could take a real leap forward and move away from being a centralized silo of information and truly disrupt its own setup -- pushing the information and data out to the edges, where the users could have more control over it themselves. And not in the simplistic manner of Facebook's other "big" announcement of the week about how it'll now let users opt out of Facebook tracking them around the web (leaving out that it kinda needed to do this to deal with the GDPR in the EU). Opting out is one thing -- pushing the actual data control back to the end users and distributing it is something entirely different.

In the early days of the web, people set up their own websites, and had pretty much full control over the data and what was done there. It was much more distributed. Over time we've moved more and more to this silo model in which Facebook is the giant silo where everyone puts their content... and has to play by Facebook's rules. But with that came responsibility on Facebook's part for everything bad that anyone did on their platform. And, hey, let's face it, some people do bad stuff. The answer isn't to force Facebook to police all bad stuff, it should be to move back towards a system where information is more distributed, and we're not pressured into certain content because that same Facebook thinks it will lead to the most "engagement."

Push the content and the data out and focus on the thing that Facebook, at its core, has always been best at: the connection function. Connect people, but don't control all of the content. Don't feel the need to police the content. Don't feel the need to decide who's trustworthy and who isn't. Be the protocol, not the platform, and open up the system so that anyone else can provide a trust overlay, and let those overlays compete. That would take Facebook out of the business of having to decide what's good and what's bad, and would give end users much more control.

Facebook, of course, seems unlikely to do this. The value of the control is that it allows them to capture more of the money from the attention generated on their platform. But, really, if it doesn't want to keep dealing with these headaches, it seems like the only reasonable way forward.


Daily Deal: Aura Premium Subscription

Wed, 2018-05-02 10:35

How will you improve in 2018? How about prioritizing your mental health? Created by top meditation teachers and therapists, and personalized by ground-breaking AI, Aura Health helps you relieve stress and anxiety by providing short, science-backed mindfulness meditation exercises every day. You can take stress ferociously head on, or you can allow Aura to help you reach a greater equilibrium without breaking the bank or spending excessive amounts of time on complicated exercises. The unlimited subscription is on sale for $79.99.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


Japanese Lawyer Sues NTT For Voluntarily Blocking 'Pirate Sites'

Wed, 2018-05-02 09:33

Well, that didn't take long. Over the past few weeks, we have been discussing yet another attempt to introduce a censorious site-blocking program to combat copyright infringement, this time in Japan. While site-blocking is unfortunately now popular in several countries, Japan's attempt is interesting in that the Japanese constitution specifically forbids censorship of this kind, save perhaps where needed to combat very serious, typically deadly, threats. What certainly can't be argued is that Japan's constitution was intended to allow a sweeping site-blocking program to combat garden-variety copyright infringement. Despite this, and despite the fact that the Japanese government hasn't bothered to actually put any law in place that would institute site-blocking, at least one ISP decided to get a head start and began blocking access to several websites it determined to be "pirate sites." Nippon Telegraph and Telephone Corp., or NTT, did this while saying the government should still get on with crafting an actual law to cover its actions, despite the obviously unconstitutional nature of the whole enterprise.

Because of its actions, it is NTT, rather than the government, that will face the first legal challenge to site-blocking, with a private citizen -- who happens to be a lawyer -- suing the ISP for invading his privacy in order to censor his access to the internet.

Lawyer Yuichi Nakazawa has now launched legal action against NTT, demanding that the corporation immediately ends its site-blocking operations. The complaint, filed at the Tokyo District Court, notes that the lawyer uses an Internet connection provided by NTT. Crucially, it also states that in order to block access to the sites in question, NTT would need to spy on customers’ Internet connections to find out if they’re trying to access the banned sites.

The lawyer informs TorrentFreak that the ISP’s decision prompted him into action.

“NTT’s decision was made arbitrarily on the site without any legal basis. No matter how legitimate the objective of copyright infringement is, it is very dangerous,” Nakazawa explains.

Regardless of the specific legal arguments in this suit, it's hard to imagine that NTT didn't see this coming. Unilaterally censoring parts of the internet without any legal framework, in a country whose national law is hardened against this very thing, was sticking the ISP's neck out, to put it mildly. Why NTT wanted to paint a legal target on its own back rather than wait for the government and courts to sort this out is beyond me. One has to imagine that this lawsuit will be the first of many if NTT carries on blocking websites. Japanese law is quite clear on the matter, after all.

Breaches of privacy could present a significant problem under Japanese law. The Telecommunications Business Act guarantees privacy of communications and prevents censorship, as does Article 21 of the Constitution.

“The secrecy of communications being handled by a telecommunications carrier shall not be violated,” the Telecommunications Business Act states, adding that “no communications being handled by a telecommunications carrier shall be censored.”

The Constitution is also clear, stating that “no censorship shall be maintained, nor shall the secrecy of any means of communication be violated.”

Now, as TorrentFreak notes, how this specific legal action is adjudicated will likely come down to the technical specifics of how NTT is doing its site-blocking. That and, of course, how Japanese courts interpret that technical implementation.

The question of whether site-blocking does indeed represent an invasion of privacy will probably come down to how the ISP implements it and how that is interpreted by the courts.

A source familiar with the situation told TF that spying on user connections is clearly a problem but the deployment of an outer network firewall rule that simply prevents traffic passing through might be viewed differently.

But what is more clear than anything else is that this lawsuit signals that the Japanese public won't simply allow ISPs to unilaterally censor their internet access. Whatever the technical details, Japanese law would make any introduction of site-blocking a matter of deftly skirting the purpose of the anti-censorship laws on the books rather than complying with them.

And if we've already reached the point that it's clear the government and NTT are trying to game the system rather than following the law, the public backlash is likely to be heavy.


Some Comcast Customers Won't Get The Latest Broadband Upgrades Without Buying Cable TV

Wed, 2018-05-02 06:22

As we've often noted, Comcast has been shielded from the cord cutting trend somewhat thanks to its growing monopoly over broadband. As users on slow DSL lines flee telcos that are unwilling to upgrade their damn networks, they're increasingly flocking to cable operators for faster speeds. When they get there, they often bundle TV services; not necessarily because they want it, but because it's intentionally cheaper than buying broadband standalone.

And while Comcast's broadband monopoly has protected it from TV cord cutting somewhat, the rise in streaming competition has slowly eroded that advantage, and Comcast is expected to see double its usual rate of cord cutting this year, according to Wall Street analysts.

Comcast being Comcast, the company has a semi-nefarious plan B. Part of that plan is to abuse its monopoly over broadband to impose arbitrary and unnecessary usage caps and overage fees. These restrictions are glorified rate hikes applied in noncompetitive markets, with the added advantage of making streaming video more expensive. It's a punishment for choosing to leave Comcast's walled garden.

But Comcast appears to have discovered another handy trick that involves using its broadband monopoly to hamstring cord cutters. Reports emerged this week that the company is upgrading the speeds of customers in Houston and parts of the Pacific Northwest, but only if they continue to subscribe to traditional cable television. The company's press release casually floats over the fact that only Comcast video customers will see these upgrades for now:

"Speed increases will vary based on the Xfinity Internet customers' current speed subscriptions. Those receiving the speed boost will benefit from an increase of 30 to 40 percent in their download speeds. Existing Xfinity Internet and X1 video customers subscribing to certain packages can expect to experience enhanced speeds this month."

As is usually the case, Comcast simply acted as if this was all just routine promotional experimentation (an argument that only works if you're unfamiliar with Comcast's other efforts to constrain emerging video competition):

"We asked Comcast a few questions, including whether it will make speed increases in other cities contingent on TV subscribership. A Comcast spokesperson didn't answer, but noted, "we test and introduce new bundles all the time." The spokesperson also said that the speed increase for Houston is the second in 2018, after one in January. The Oregon/SW Washington speed increase is apparently the first one this year."

In a healthy market with healthy regulatory oversight, either competition or adult regulatory supervision would prevent Comcast from using its broadband monopoly to constrain consumer video choices. But if you hadn't noticed, the telecom and TV sector and the current crop of regulators overseeing it aren't particularly healthy, and with the looming death of net neutrality you're going to see a whole lot more behavior like this designed to erect artificial barriers to genuine consumer choice and competition.


Another Federal Court Says Compelled Decryption Doesn't Raise Fifth Amendment Issues

Wed, 2018-05-02 03:23

Another federal court is wrestling with compelled decryption, and it appears the Fifth Amendment will be no better off by the time it's all over. A federal judge in California has decided compelling decryption of devices is only a small Fifth Amendment problem -- one that can be overlooked if the government already possesses certain knowledge. [h/t Orin Kerr]

The defendant facing child porn charges requested relief from a magistrate's order to compel decryption. The government isn't asking Ryan Spencer to turn over his passwords. But it wants exactly the same result: decrypted devices. The government's All Writs Order demands Spencer unlock the devices so law enforcement can search their contents. As the court notes in the denial of Spencer's request, the Fifth Amendment doesn't come into play unless the act of production -- in this case, turning over unlocked devices -- is both "testimonial" and "incriminating."

Spencer argued both acts are the same. The government may not ask him directly for his passwords, but a demand he produce unlocked devices accomplishes the same ends. As the court notes, the argument holds "superficial appeal." It actually holds a bit more than that. A previous dissenting opinion on the same topic said the government cannot compel safe combinations by "either word or deed."

This opinion [PDF], however, goes the other way. Judge Breyer likes the wall safe analogy, but arrives at a different conclusion than Justice Stevens did in an earlier dissent. The court finds drawing a Fifth Amendment line at password protection would produce a dichotomy it's not willing to accommodate.

[A] rule that the government can never compel decryption of a password-protected device would lead to absurd results. Whether a defendant would be required to produce a decrypted drive would hinge on whether he protected that drive using a fingerprint key or a password composed of symbols.

The refusal to craft this bright line ultimately makes little difference. The line already exists. Almost no courts have said the compelled production of fingerprints is a Fifth Amendment violation. Producing passwords, however, is an issue that's far from settled. In the cases that have gone the government's way, the key appears to be what the government already knows: the "foregone conclusions." The same goes here.

The court admits producing unlocked devices strengthens the government's case even before any searches take place.

So: the government’s request for the decrypted devices requires an act of production. Nevertheless, this act may represent incriminating testimony within the meaning of the Fifth Amendment because it would amount to a representation that Spencer has the ability to decrypt the devices. See Fisher, 425 U.S. at 410. Such a statement would potentially be incriminating because having that ability makes it more likely that Spencer encrypted the devices, which in turn makes it more likely that he himself put the sought-after material on the devices.

But that only deals with the incrimination side. Is it testimonial? The court thinks it isn't. Or at least, it believes whatever testimonial value it adds is almost nonexistent. All the government needs to show is that the defendant has the ability to unlock the devices.

Turning over the decrypted devices would not be tantamount to an admission that specific files, or any files for that matter, are stored on the devices, because the government has not asked for any specific files. Accordingly, the government need only show it is a foregone conclusion that Spencer has the ability to decrypt the devices.

It's a low bar but one that's sometimes difficult to reach if the government can't clearly link the defendant to the locked devices obtained during the search of a residence or business. As the court notes, it requires more than a reasonable assumption that files the government seeks might reside on the locked devices.

But it is nonsensical to ask whether the government has established with “reasonable particularity” that the defendant is able to decrypt a device. While physical evidence may be described with more or less specificity with respect to both appearance and location, a defendant’s ability to decrypt is not subject to the same sliding scale. He is either able to do so, or he is not. Accordingly, the reasonable particularity standard cannot apply to a defendant’s ability to decrypt a device.

The government needs far more if it seeks to compel decryption.

The appropriate standard is instead clear and convincing evidence. This places a high burden on the government to demonstrate that the defendant’s ability to decrypt the device at issue is a foregone conclusion. But a high burden is appropriate given that the “foregone conclusion” rule is an exception to the Fifth Amendment’s otherwise jealous protection of the privilege against giving self-incriminating testimony.

And the court finds the government does possess clear, convincing evidence.

All three devices were found in Spencer’s residence. Spencer has conceded that he owns the phone and laptop, and has provided the login passwords to both. Moreover, he has conceded that he purchased and encrypted an external hard drive matching the description of the one found by the government. This is sufficient for the government to meet its evidentiary burden. The government may therefore compel Spencer to decrypt the devices.

There is one caveat, however.

Once Spencer decrypts the devices, however, the government may not make direct use of the evidence that he has done so.

As the court points out, if the government's foregone conclusion is the correct conclusion, additional evidence linking Spencer to the locked devices will be unnecessary. The government should have no use for the testimony inherent in the act -- the concession that Spencer owned and controlled the now-unlocked devices, making him ultimately criminally responsible for any evidence located in them.

In terms of compelled production, passwords continue to beat fingerprints for device security, but only barely.


Princeton Project Aims To Secure The Internet Of Broken, Shitty Things

Tue, 2018-05-01 19:50

Year after year, we're installing millions upon millions of "internet of things" devices on home and business networks that have only a fleeting regard for security or privacy. The width and depth of manufacturer incompetence on display can't be overstated. Thermostats that prevent you from actually heating your home. Smart door locks that make you less secure. Refrigerators that leak Gmail credentials. Children's toys that listen to your kids' prattle, then (poorly) secure said prattle in the cloud. Cars that could, potentially, result in your death.

The list goes on and on, and it grows exponentially by the week, especially as such devices are quickly compromised and integrated into massive new botnets. And as several security experts have noted, nobody in this chain of dysfunction has the slightest interest in doing much about this massive rise in "invisible pollution":

"The market can't fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don't care. Their devices were cheap to buy, they still work, and they don't even know Brian. The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it's an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution."

One core part of the problem is that IOT device makers refuse to provide much control or transparency over what their internet-connected devices actually do once online. Often the tools and device interfaces provided to the end user are comically simple, providing you with virtually no data on how much bandwidth your devices are consuming, or what data they're transferring back to the cloud (frequently unencrypted). As a result, many normal people are participating in historically massive DDOS attacks or having their every behavior monitored without having the slightest idea it's actually occurring.

To that end, Princeton's computer science department has launched a research project called IoT Inspector, which the researchers hope will provide users with a little more insight into what IoT devices are actually up to. The researchers behind the project say they spent some time analyzing fifty different common IoT devices and, like previous studies, found that security and privacy in these devices was a total shitshow. Sending private user data unencrypted back to the cloud was common:

"Unfortunately, many of the devices we have examined lack even these basic security or privacy features. For example, the Withings Smart Blood Pressure Monitor included the brand of the device and the string “blood pressure” in unencrypted HTTP GET request headers. This allows a network eavesdropper to (1) learn that someone in a household owns a blood pressure monitor and (2) determine how frequently the monitor is used based on the frequency of requests. It would be simple to hide this information with SSL."
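To make the eavesdropping problem concrete, here's a toy illustration (the request bytes, hostname, and marker strings below are invented for this sketch, not the actual Withings traffic): anyone on the network path can pull identifying strings straight out of a plaintext HTTP request, which TLS would have hidden.

```python
# Toy illustration of the eavesdropping problem described above. The request
# is invented, but mimics the pattern the researchers describe: device
# branding and "blood pressure" strings sent in plaintext HTTP.
RAW_REQUEST = (
    b"GET /bloodpressure/upload?bp=120/80 HTTP/1.1\r\n"
    b"Host: api.health-devices.example\r\n"
    b"User-Agent: Withings-BPM/2.1\r\n\r\n"
)

def eavesdrop(raw: bytes) -> list:
    """Identifying strings a passive observer could extract from plaintext."""
    markers = (b"bloodpressure", b"bp=", b"Withings")
    return [m.decode() for m in markers if m in raw]

print(eavesdrop(RAW_REQUEST))  # the whole request is visible on the wire
```

With HTTPS, an observer on the path sees roughly the destination address and hostname; the URL path, query string, and User-Agent -- everything that identifies the device and the reading -- travel encrypted.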

As were devices that immediately began chatting with all manner of partner services whether the user wants them to or not:

Samsung Smart TV: During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook—even though we did not sign in or create accounts with any of them.

Again, user control and transparency are almost always an afterthought. Obviously, the creation of some unified standards is one solution, as is creating routers and hardware that alert users when their devices have been compromised. Smarter networks and hardware are going to need to be a cornerstone of any proposed solution, the researchers note:

We are experimenting with machine learning-based DDoS detection using features using IoT-specific network behaviors (e.g., limited number of endpoints and regular time intervals between packets). Preliminary results indicate that home gateway routers or other network middleboxes could automatically detect local IoT device sources of DDoS attacks with high accuracy using low-cost machine learning algorithms.
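As a rough illustration of the kind of cheap detection the researchers describe (this is a sketch under invented thresholds, not the Princeton implementation), the two features they mention -- how few endpoints a device talks to and how regular its packet timing is -- are enough to flag the classic signature of a compromised device flooding one victim:

```python
# Rough sketch (not the Princeton code; thresholds are invented) of flagging
# an IoT device as a DDoS source using two cheap features the researchers
# mention: endpoint count and inter-packet timing regularity.
from statistics import pstdev

def extract_features(packets):
    """packets: (timestamp_seconds, destination) tuples seen from one device."""
    endpoints = {dst for _, dst in packets}
    times = sorted(t for t, _ in packets)
    gaps = [b - a for a, b in zip(times, times[1:])]
    jitter = pstdev(gaps) if len(gaps) > 1 else float("inf")
    return len(endpoints), jitter

def looks_like_ddos_source(packets, max_jitter=0.005, min_packets=100):
    """Flag a high-volume, metronome-regular stream aimed at a single
    target -- the signature of a compromised device flooding one victim."""
    n_endpoints, jitter = extract_features(packets)
    return len(packets) >= min_packets and n_endpoints == 1 and jitter < max_jitter

# A flood: 500 packets to one victim at machine-regular 1ms intervals.
flood = [(i * 0.001, "victim.example") for i in range(500)]
# Normal IoT chatter: a few packets to a handful of cloud endpoints.
normal = [(i * 7.3, "cloud%d.example" % (i % 4)) for i in range(20)]
print(looks_like_ddos_source(flood), looks_like_ddos_source(normal))
```

A home gateway could run something like this per device; because benign IoT traffic touches so few endpoints in such predictable patterns, even crude features like these separate attack traffic well, which is the researchers' point about low-cost algorithms.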

Of course better standards are going to need to be built on the backs of a joint collaboration between governments, companies, consumers and researchers. And while we've seen mixed results on that front so far, efforts like this (and the Consumer Reports' open source attempt to make privacy and security an integral part of product reviews) are definitely a step in the right direction.

Permalink | Comments | Email This Story
Categories: Tech News

Suburban Express Sued By Illinois Attorney General For Behaving Like Suburban Express

Tue, 2018-05-01 15:29

We've talked quite a bit about Suburban Express in these pages. The bus company chiefly works the Illinois university circuit, busing students and others between the schools and transportation hubs like O'Hare Airport. In addition, the company regularly sues any customers critical of its services, occasionally runs away from those suits, then refiles them, all while owner Dennis Toeppen harasses and publicly calls out these customers on the company website and its social media accounts. The company also has a deep history of treating non-white customers differently from, and more poorly than, others, culminating in a recent advertisement it sent out promising riders that they won't feel like they're in China when on its buses (the University of Illinois has a sizable Asian student population). After that advertisement, Illinois Attorney General Lisa Madigan announced an investigation into the company's practices, prompting Suburban Express to apologize several times for the ad.

Well, if Toeppen had hoped those apologies would keep the AG at bay, it didn't work. Madigan has now sued the company in Chicago for discriminatory behavior and the mistreatment of its customers.

The lawsuit, filed in U.S. District Court in Chicago, seeks a restraining order against the company to stop it from publishing customers’ financial information, halt harassment and prevent the company from forcing riders to accept unfair contract terms. If the company does not change its practices, Madigan said, the attorney general wants the company out of business.

The company’s actions, Madigan said, constitute “flagrant and numerous violations” of Illinois’ civil rights and consumer protection laws.

“My lawsuit alleges that Suburban Express has long been engaged in illegal discrimination and harassment of college students in Illinois, particularly University of Illinois students and their families,” Madigan said at a morning news conference at the Thompson Center to announce the lawsuit.

Among the allegations is that Suburban Express harasses its critics, publishes some of their financial information in an attempt to shame them, discriminates against customers based on their race, and generally tries to make the lives of anyone that doesn't love the services they get a living hell. All of this followed a months-long investigation into the actions of the company and Toeppen himself.

In response, Suburban Express posted to its Facebook page that it merely defends itself against lying critics, before suggesting how awesome it is.

"Defending ourselves against online harrassment (sic) does not constitute harrassment (sic) of the harrasser. (sic) The complaint seems to demonstrate a lack of any sense of humor on the part of Attorney General Madigan. Tongue in cheek posts like the picture of bowing passengers cannot reasonably be inferred to mean that we have something against certain customers."

“The world is a better place as a result of Suburban Express. … We take this unfounded assault on our reputation seriously and we intend to defend this lawsuit vigorously,” the post concluded. “We’d love to hear from attorneys interested in defending us against this lawsuit.”

What attorneys will rush to the side of a company that has so clearly demonstrated exactly who and what it is will be interesting to watch. Part of Suburban Express' problem is that it engaged in so much of this harassment online, where the slate can never be truly scrubbed, allowing the AG to present the court with the company's own words and actions.

Given the long history of public behavior by the company, it's hard to imagine how any of this goes well for it.


Techdirt Podcast Episode 165: Is 'Free' Bad?

Tue, 2018-05-01 13:30

In the last few years, a lot of the conversation around technology in general has shifted its focus from excitement about the obvious benefits to concern about its downfalls and side effects. It even feels like there's a general sense that "technology is bad for society" in a lot of places. This comes with a lot of associated myths, including the prominent idea that "if you're not paying for something, you're the product being sold" — an idea that is, at best a massive oversimplification. So on this week's podcast we're discussing the changing cultural attitudes towards technology, especially free online services and the many myths and misunderstandings about how they operate.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.


Two-Man Police Department Acquires $1 Million In Military Gear

Tue, 2018-05-01 11:58

An ultra-safe Michigan town of 6,800 has claimed more than $1 million in military equipment through the Defense Department's 1033 program. The program allows law enforcement agencies to obtain anything from file cabinets to mine-resistant assault vehicles for next to nothing, provided the agencies can show a need for the equipment. Most can "show" a "need," since it's pretty easy to type something up about existential terrorist/drug threats. Boilerplate can be adjusted as needed, but for the most part, requests are granted and oversight -- at either the federal or local level -- is almost nonexistent.

This has come to a head in Thetford Township, the fourth-safest municipality in Michigan, and home to more than $1 million in military gear and two (2) police officers.

The free material, received through a federal program, includes mine detectors and Humvees, tractors and backhoes, hydroseeders and forklifts, motorized carts and a riding lawnmower. The landlocked township also has gotten boat motors and dive boots.

While much of the gear worth $1 million has never been used by the township, some has been given to residents, township officials said.

The township supervisor and a trustee said the police have stymied their attempts to find out what equipment they have, where it’s located and why some of it has been given away. The police didn’t keep track of what they had or what they had given away, according to a township audit last year.

A belated, half-hearted audit by Police Chief Bob Kenny (supervisor of one [1] police officer) showed his department had acquired 950 pieces of equipment, including a couple of Humvees, three ATVs, a tractor, a forklift, and a number of other vehicles. More than 300 items are stored "off-site," which apparently means parked on private property and used by private citizens.

Town supervisor Gary Stevens has been trying to get to the bottom of this outsized stockpile. But he's running into resistance. Supporters of the town's two-person police force (and apparent beneficiaries of the federal program) have been pushing back. A recall campaign has been started by farmer Eugene Lehr, who has 21 pieces of military surplus equipment on his property.

A nearby sheriff's department has stepped in to perform an independent audit but has yet to release its findings. Equipment has apparently been given to citizens but no paper trail exists to track who ended up with what and how much may have been sold by the department. But what's left is still impressive. A two-man department somehow justified the acquisition of seven trucks and nine trailers over the last decade, in addition to everything else the department has stockpiled since Bob Kenny became chief.

While it may seem like most of the acquisitions are innocuous -- not the sort of thing one associates with a militarized police force -- the fact remains the program has almost zero oversight. Not until after more than $1 million in equipment was routed to a place that did not have a pressing need for the items did the DoD finally step in and suspend the department's participation in the program. Equipment that may have been put to better use elsewhere is parked on private property or has simply vanished into thin air. This is a waste of tax dollars that does nothing to make policing better or a safe township even safer.


Germany's Supreme Court Confirms That Adblocking Is Legal, In Sixth Consecutive Defeat For Publishers

Tue, 2018-05-01 10:42

Adblocking is something that many people feel strongly about, as the large number of comments on previous posts dealing with the topic indicates. Publishers, too, have strong feelings here, including the belief that they have a right to make people view the ads they carry on their sites. (Techdirt, of course, has a rather different position.) In Germany, publishers have sued the makers of AdBlock Plus no less than five times -- and lost every case. It will not surprise Techdirt readers to learn that those persistent defeats did not stop the German media publishing giant Axel Springer from trying yet again, at Germany's Supreme Court. It has just lost. As Adblock Plus explains in a justifiably triumphant blog post:

This ruling confirms -- just as the regional courts in Munich and Hamburg stated previously -- that people have the right in Germany to block ads. This case had already been tried in the Cologne Regional Court, then in the Regional Court of Appeals, also in Cologne -- with similar results. It also confirms that Adblock Plus can use a whitelist to allow certain acceptable ads through.

Reuters notes that Springer's case was just the first of five against Adblock Plus to reach the Supreme Court in Germany, although the others are presumably moot in the light of this definitive decision. However, that does not mean Springer is giving up. There remains one final option:

Springer said it would appeal to the [German] Constitutional Court on the grounds that adblockers violated press freedom by disrupting online media and their financial viability.

Yes, that's right: if you are using an adblocker, you are a bad person, who hates press freedom....

Follow me @glynmoody on Twitter, and +glynmoody on Google+.


Daily Deal: 2018 Essential JavaScript Coding Bundle

Tue, 2018-05-01 10:37

JavaScript provides web developers with the knowledge to program more intelligently and idiomatically, and the 2018 Essential JavaScript Coding Bundle will help you explore the best practices for building an original, functional, and useful cross-platform library. With 8 online courses and 3 ebooks, you'll have the ultimate guide to JavaScript. Topics covered include Angular 2, Vue.js, Node, Redux, and more. The bundle is on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


High Court Says UK Government Can No Longer Collect Internet Data In Bulk

Tue, 2018-05-01 09:36

UK civil liberties group Liberty has won a significant legal battle against the Snoopers Charter. A recent ruling [PDF] by the UK High Court says the data retention provisions, which include mandated extended storage of things like web browsing history by ISPs, are incompatible with EU privacy laws.

The court found the data retention provisions are at odds with civil liberties protections for a couple of reasons. First, the oversight is too limited to be considered protective of human rights asserted by the EU governing body. As the law stands now, demands for data don't require independent oversight or authorization.

Second, even though the Charter claims demands for data will be limited to "serious crimes," the actual wording shows there are no practical limitations preventing the government from accessing this data for nearly any reason at all.

The decision quotes the Charter's stated reasons for obtaining data, which range from "public safety" to "preventing disorder" to "assessing or collecting taxes." Obviously, the broad surveillance powers will not be limited to "serious crimes," contrary to the government's assertions in court.

First, the wording of the draft declaration is so broad that it would include areas which are outside (or potentially outside) the area of serious crime: for example, the area of national security. As will become apparent later, the issue of whether the area of national security falls within the scope of EU law at all is the subject of dispute between the parties.

The second sentence refers to the government's argument: that UK national security concerns trump European law. Unfortunately, the High Court does not provide an answer as to whether UK law can ignore CJEU decisions when it comes to securing the nation. This will have to wait until after a decision is handed down in another challenge to the surveillance law.

[I]n our view, although the terms of section 94 of the 1984 Act and the terms of Part 4 of the 2016 Act are not identical, the questions which have been referred by the IPT are not confined to the precise scope of section 94. Rather they raise broader questions about the scope of EU law, having regard to Article 4 TEU and Article 1(3) of the e-Privacy Directive; and also raise the particular question of whether any of the Watson CJEU requirements apply in the field of national security.

For those reasons we refuse the application by the Claimant to make a reference to the CJEU on this question. This part of this claim will be stayed pending the CJEU’s decision in the reference in the Privacy International case.

In the end, the court decides this part of the Snoopers Charter must be stricken and rewritten to comply with EU privacy protections. The UK government has six months to fix the law. Until that point, it appears UK agencies will still be able to demand data in bulk under the Charter draft. Once the fixes are in and enacted, bulk collections of internet browsing data and communications metadata will cease… at least until the UK exits the European Union.


Sprint, T-Mobile Try To Sell The Public On A Job-Killing, Competition Eroding Megamerger

Tue, 2018-05-01 06:21

Sprint and T-Mobile are once again talking megamerger. The two companies tried to merge in 2014, but had their romantic entanglements blocked by regulators who (quite correctly) worried that the elimination of one of just four major players in the space would eliminate jobs, reduce competition and drive up costs for consumers. Emboldened by the Trump FCC's rubber stamping of industry desires, the two companies again spent much of last year talking about a potential tie up, though those efforts were ultimately scuttled after the two sides couldn't agree on who'd get to run the combined entity.

But the two companies appear to have settled their disagreements, and over the weekend announced they'd be attempting to merge once again as part of a $26 billion deal. Executives for both companies spent most of the weekend trying to convince the public that dramatically reducing competitors in the sector would magically somehow create more competition:

By coming together with @TMobile, we’ll drive competition, lower prices, accelerate disruption and spur innovation to make America the true leader in mobile #5G. #5GForAll $S $TMUS

— MarceloClaure (@marceloclaure) April 29, 2018

Of course that's not how competition works. While T-Mobile has had a net positive impact on the wireless sector on things like hidden fees and absurd international roaming costs, the four major carriers had already been backing away from promotions so far this year as they try to avoid something the telecom sector loathes: genuine price competition. As our friends in Canada can attest, reducing the overall number of major competitors from four to three only reduces the incentive for real price competition even further. It's simply not debatable.

And while the two companies are trying to claim that Sprint couldn't have survived on its own, that's not really true. The company's debt load is notable, but with Japanese owner Softbank, the company had slowly but surely been getting a handle on its finances. And if a deal was inevitable for survival, there are plenty of potential merger partners (from Dish Network to a major cable company like Charter Spectrum) that could have been pursued without eliminating a major competitor.

The two companies are also amusingly trying to claim that the deal will somehow create jobs:

Nothing will happen for a while, but after the deal closes, it will open thousands of new jobs! Key details here:

— John Legere (@JohnLegere) April 29, 2018

And while that's adorable salesmanship, it's indisputably false. History has proven time, and time, and time again that such consolidation in telecom erodes competition, jobs, and quality service. Mindless M&A mania is a primary reason why you all loathe Comcast, since growth for growth's sake consistently means service quality takes a back seat.

Wall Street analysts had previously predicted that a tie up between the two companies could result in the elimination of anywhere from 10,000 to 30,000 jobs (the latter being more than Sprint even currently employs) as redundant retail locations, middle managers, and engineers are inevitably dismissed. And while both companies are spouting the usual lines about how "nothing will really change," anybody that has lived through a deal like this one (or, say, just paid attention to history) should realize the folly of such claims.

Whether the deal will be approved by the Trump administration is uncertain. While the Ajit Pai run FCC has made it abundantly clear it's willing to rubber stamp every fleeting sector desire regardless of its impact (net neutrality, privacy), the Trump DOJ has become a bit of a wildcard in the wake of its lawsuit to thwart the AT&T Time Warner merger. Some analysts see the deal as having only a 40% chance of approval, though Sprint and T-Mobile are trying their best to pander to the Trump admin by claiming that the miracles of next-gen wireless (5G) can only arrive if they're allowed to merge.

But there's a reason both companies announced the deal on a Sunday when everybody was napping or tending to the lawn. There's also a reason they're trying to rush this deal through now before adult regulatory supervision inevitably returns at the FCC. And that's again because this deal, like so many telecom sector megadeals before it, will only benefit investors and shareholders, not the public or the internet at large. Since companies can't admit that these deals are largely harmful to anybody but themselves, we get obnoxious sales pitches that aggressively ignore common sense -- and history.


Police Use Genealogy Site To Locate Murder Suspect They'd Been Hunting For More Than 30 Years

Tue, 2018-05-01 03:23

DNA isn't the perfect forensic tool, but it's slightly preferable to the body of junk science prosecutors use to lock people up. Its ability to pinpoint individuals is overstated, and the possibility of contamination makes it just as easy to lock up innocent people as garbage theories like bite mark matching.

In terms of process of elimination, it's still a go-to for prosecutors. The rise of affordable DNA testing has provided a wealth of evidence to law enforcement. Investigators are no longer limited to samples they've taken from arrestees. Databases full of DNA info are within reach 24 hours a day -- and all law enforcement needs is an account and a few bucks to start tracking down DNA matches from members of the public who've never been arrested.

Investigators used DNA from crime scenes that had been stored all these years and plugged the genetic profile of the suspected assailant into an online genealogy database. One such service, GEDmatch, said in a statement on Friday that law enforcement officials had used its database to crack the case. Officers found distant relatives of Mr. DeAngelo’s and, despite his years of eluding the authorities, traced their DNA to his front door.

“We found a person that was the right age and lived in this area — and that was Mr. DeAngelo,” said Steve Grippi, the assistant chief in the Sacramento district attorney’s office.

This "search" may close the books on at least ten unsolved murders featuring the same suspect DNA. The process involved, however, raises questions. But customers of companies like GEDmatch and 23andMe probably won't like the answers. Any ethical questions they may have about companies sharing DNA info with law enforcement are likely covered by the terms of service. Customers looking to the Bill of Rights may be disappointed to discover the courts have little positive to say about Fourth Amendment protections for third-party records.

Adding your DNA to these databases makes this info publicly available. If everyone's DNA was siloed off from everyone else's, genealogy services would be completely useless. It's expected your DNA info will be shared with others. If "others" includes law enforcement, the terms of service have that eventuality covered. Even if other uses of your DNA weren't specified, there's nothing illegal about law enforcement agencies creating accounts to submit DNA for matches. If there's a Constitutional challenge, the third-party doctrine likely eliminates anything remaining for the court to consider once it gets past the obvious hurdle: DNA-matching services match DNA. Complete strangers are able to "access" the DNA info of others without creating privacy issues.

GEDmatch's response to all of this? If you don't want your DNA to end up in the hands of law enforcement, delete your account. This isn't exactly customer-friendly, but it reflects the reality of participating in a service that offers DNA matching. Even if a company refuses to hand over info voluntarily, it probably wouldn't take more than a subpoena to knock it loose. As long as law enforcement is using the system like a customer would -- that is, simply submitting DNA for a match -- the only problems it poses are at the far end of the ethical spectrum. If it's doing anything else -- like asking companies to notify them if certain DNA samples are submitted -- then there are problems. But as long as it's not inserting itself into the supply chain, there's really no privacy invasion occurring.


Device Detects Drug Use Through Fingerprints, Raising A Host Of Constitutional Questions

Mon, 2018-04-30 19:36

If this tech becomes a routine part of law enforcement loadouts, judicial Fourth and Fifth Amendment findings are going to be upended. Or, at least, they should be. I guess citizens will just have to see how this all shakes out.

A raft of sensitive new fingerprint-analysis techniques is proving to be a potentially powerful, and in some cases worrying, new avenue for extracting intimate personal information—including what drugs a person has used.


The new methods use biometrics to analyze biochemical traces in sweat found along the ridges of a fingerprint. And those trace chemicals can quickly reveal whether you have ingested cocaine, opiates, marijuana, or other drugs. One novel, noninvasive forensic technique developed by researchers at the University of Surrey in the United Kingdom can detect cocaine and opiate use from a fingerprint in as little as 30 seconds. The team collected 160 fingerprint samples from 16 individuals at a drug-treatment center who had used cocaine within the past 24 hours—confirmed by saliva testing—along with 80 samples from non-users. The assay—which was so sensitive that it could still detect trace amounts of cocaine after subjects washed their hands with soap—correctly identified 99 percent of the users, and gave false positive results for just 2.5 percent of the nonusers, according to a paper published in Clinical Chemistry.
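One caveat those accuracy figures deserve: sensitivity and false-positive rate are not the probability that a flagged individual actually used the drug. That depends on the base rate of users in the tested population, which Bayes' rule makes easy to sketch. The 5% base rate below is purely an assumed figure for illustration:

```python
def positive_predictive_value(sensitivity, false_positive_rate, base_rate):
    """P(actual user | positive test), via Bayes' rule."""
    true_pos = sensitivity * base_rate          # users correctly flagged
    false_pos = false_positive_rate * (1 - base_rate)  # non-users flagged
    return true_pos / (true_pos + false_pos)

# With the paper's 99% sensitivity and 2.5% false-positive rate, if only
# 5% of the people tested are actual users, roughly a third of positive
# results are false alarms.
print(round(positive_predictive_value(0.99, 0.025, 0.05), 2))  # -> 0.68
```

In a drug-treatment center, where the base rate is high, positives are highly reliable; at a roadside checkpoint, far less so.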

Let's discuss the phrase "non-invasive." It was relatively non-invasive when fingerprints were simply used to identify people. (That science isn't exactly settled, but we'll set that aside for now.) When smartphones and other devices used fingerprint scanners for ID, the "non-invasive" application of fingerprints was no longer non-invasive. An identifying mark, possessing no Fifth Amendment protection, gave law enforcement and prosecutors the option of using something deemed "non-testimonial" to obtain plenty of evidence to be used against the fingerprinted.

This opens up a whole new Constitutional Pandora's Box by giving officers the potential to apply fingerprints during traffic stops to see if they can't generate enough probable cause to perform a warrantless search of the car and everyone in it. It's generally criminal to possess drugs. Evidence of ingested drugs means suspects possessed them at some point in time, but evidence of drug use is generally only useful in driving under the influence cases. That's in terms of prosecutions, though. For roadside searches -- where officers so very frequently "smell marijuana" -- evidence of drug use is a free pass for warrantless searches.

That's just the Fourth Amendment side. The Fifth Amendment side is its own animal. Evidence obtained through fingerprints would seemingly make the production of fingerprints subject to Fifth Amendment protections. It should at least rise to the level of blood draws and breath tests, even though this is far more intrusive (in terms of evidence obtained) than tech normally deployed at DUI checkpoints. Blood draws often require warrants. Breath tests, depending on surrounding circumstances, aren't nearly as settled, with courts often finding obtaining carbon dioxide from breathing humans to be minimally testimonial.

As Scott Greenfield points out, the first tests of constitutionality will occur at street level. Cops will deploy the tech, hoping to good faith their way past constitutional challenges.

Precedent holds that the police are authorized to seize people’s fingerprints upon arrest, as the Fifth Amendment does not apply to physical characteristics. But the rubric is “fingerprints can be seized” based on their limited utility as physical characteristics used for identification purposes.

If they should be used for entirely different purposes, for the ascertainment of whether a person ingested drugs, then the rationale allowing the seizure of prints under the Fifth Amendment no longer applies. It certainly won’t be in the cops’ best interests to draw this distinction, to limit their use of prints to the purpose for which they’re allowed and to demonstrate constitutional restraint by not exceeding that purpose.

This means everything will get much worse for drivers and other recipients of law enforcement attention in the short-term. When the challenges to searches and seizures filter their way up through the court system, things might improve. But it won't happen rapidly and any judges leaning towards redefining the scope of fingerprint use will face strong government challenges.

It will probably be argued evidence of drug use obtained through these devices is no different than a cop catching a whiff of marijuana. On one hand, no cop could credibly claim to be able to detect drug use simply by touching someone's fingers. On the other hand, the reasonable reliability of the tech makes challenges more difficult than arguing against an officer's claim they smelled drugs during the traffic stop. The key may be predicating a challenge on the fact that the device actually tests sweat, not fingerprints, making it an issue of bodily fluids again and (slightly) raising the bar for law enforcement.

This news isn't disturbing for what it is. The obvious initial application is in workplaces, where random drug tests are standard policies for many companies. That tech advancements would progress to this point -- a 10-minute test that requires only the momentary placement of a finger on a test strip -- was inevitable. It's what comes after that will be significant. Courts have often cut law enforcement a lot of slack and tend to lag far behind tech developments and their implications on Constitutional rights. A new way to obtain evidence using something courts generally don't consider to be testimonial is going to disrupt the Constitution. Hopefully, the courts will recognize the distinction between identification and evidence and rule appropriately.


USPTO Suggests That AI Algorithms Are Patentable, Leading To A Whole Host Of IP And Ethics Questions

Mon, 2018-04-30 15:44

The world is slowly but surely marching towards newer and better forms of artificial intelligence, with some of the world's most prominent technology companies and governments heavily investing in it. While limited or specialist AI is the current focus of many of these companies, building what is essentially single-trick intelligent systems to address limited problems and tasks, the real prize at the end of this rainbow is an artificial general intelligence. When an AGI could be achieved is still squarely up in the air, but many believe this to be a question of when, not if, such an intelligence is created. Surrounding that are questions of ethics that largely center on whether an AGI would be truly sentient and conscious, and what that would imply about our obligations to such a mechanical being.

Strangely, patent law is being forcibly injected into this ethical equation, as the USPTO has come out in favor of the algorithms governing AI and AGI being patentable.

Andrei Iancu, director of the U.S. Patent and Trademark Office (USPTO), says that the courts have strayed on the issue of patent eligibility, including signaling he thought algorithms using artificial intelligence were patentable as a general proposition.

That came in a USPTO oversight hearing Wednesday (April 18) before a generally supportive Senate Judiciary Committee panel.

Both Iancu and the legislators were in agreement that more clarity was needed in the area of computer-related patents, and that the PTO needed to provide more precedential opinions when issuing patents so it was not trying to reinvent the wheel each time, and to better guide courts.

On some level, even without considering the kind of AI and AGI once thought the stuff of science fiction, the general question of patenting algorithms is absurd. Algorithms, after all, are essentially a manipulated form of math, far different from true technological expression or physical invention. They are a way to build equations for various functions, including, potentially, equations that would both govern AI and allow AI to learn and evolve in a way not so governed. However ingenious they might be, they are no more an invention than the process human cells use to pass along DNA would be, once discovered by human beings. It's far more discovery than invention, if it's invention at all. Man is now trying to organize mathematics in such a way as to create intelligence, but it is not inventing that math.

Yet both the USPTO and some in government seem to discard this question for arguments based on mere economic practicality.

Sen. Kamala Harris drilled down on those Supreme Court patent eligibility decisions -- Alice and Mayo among them -- in which the court suggested algorithms used in artificial intelligence (AI) might be patentable. She suggested that such a finding would provide incentive for inventors to pursue the kind of AI applications being used in important medical research.

Iancu said that generally speaking, algorithms were human-made and the result of human ingenuity, rather than mathematical representations of discovered laws of nature -- E=MC2, for example -- which are not patentable. Algorithms are not set from time immemorial, or "absolutes," he said. They depend on human choices, which he said differs from E=MC2 or the Pythagorean theorem, or from a "pattern" being discovered in nature.

Again, this seems to be a misunderstanding of what an algorithm is. The organization and ordering of a series of math equations is not human invention. It is most certainly human ingenuity, but so was the understanding of the Bernoulli Principle, which didn't likewise result in a patent on the math that makes airplanes fly. Allowing companies and researchers to lock up the mathematical concepts for artificial intelligence, whatever the expected incentivizing benefits, is pretty clearly beyond the original purpose and scope of patent law.

But let's say the USPTO and other governments ignore that argument. Keep in mind that algorithms that govern the behavior of AI are mirrors of the intelligent processes occurring in human brains. They are that which will make up the "I" for an AI, essentially making it what it is. Once we reach the level of AGI, it's reasonable to consider those algorithms the equivalent of the brain function and, by some arguments, the consciousness of a mechanical or digital being. Were the USPTO to have its way, that consciousness would be patentable. For those who believe we might one day be the creators of some form of digital life or consciousness, that entire concept is absurd, or at least terribly unethical.

Such cavalier conversations about patenting the math behind potentially true AGI deserve far more thought than a blanket assertion that such algorithms are generally patentable.


Congress And The CASE Of The Proposed Bill That Helps Copyright Trolls

Mon, 2018-04-30 14:12

One of the recurrent themes on Techdirt is that law itself should not become a tool for unlawful abuse. No matter how well-intentioned, if a law provides bad actors with the ability and opportunity to easily chill others' speech or otherwise lawful activity, then it is not a good law.

The CASE Act is an example of a bad law. On the surface it may seem like a good one: one of the reasons people are able to abuse the legal system to shut down those they want to silence is because getting sucked into a lawsuit, even one you might win, can be so ruinously expensive. The CASE Act is intended to provide a more economical way to resolve certain types of copyright infringement disputes, particularly those involving lower monetary value.

But one of the reasons litigation is expensive is that there are a number of checks built into it to make sure that, before anyone can be forced to pay damages or be stopped from saying or doing what they were saying or doing, the party making this demand is actually entitled to make it. A big problem with the CASE Act is that in exchange for the cost savings it may offer, it gives up many of those critical checks.

In recognition of the harm removal of these checks would invite, EFF has authored a letter to the House Judiciary Committee raising the alarm on how the CASE Act would only aggravate, rather than remediate, the significant troll problem.

Per the letter, federal courts have been increasingly "reining in [trolling behavior] by demanding specific and reliable evidence of infringement—more than boilerplate allegations—before issuing subpoenas for the identity of an alleged infringer. Some federal courts have also undertaken reviews of copyright troll plaintiffs’ communications with their targets with an eye to preventing coercion and intimidation. These reforms have reduced the financial incentive for the abusive business model of copyright trolling."

But under the CASE Act, these provisions would not apply. Instead:

[L]egally unsophisticated defendants—the kind most often targeted by copyright trolls—are likely to find themselves bound by the judgments of a non-judicial body in faraway Washington, D.C., with few if any avenues for appeal. The statutory damages of up to $30,000 proposed in the CASE Act, while less than the $150,000 maximum in federal court, are still a daunting amount for many people in the U.S., more than high enough to coerce Internet users into paying settlements of $2,000–$8,000. Under the Act, a plaintiff engaged in copyright trolling would not need to show any evidence of actual harm in order to recover statutory damages. And unlike in the federal courts, statutory damages could be awarded under the CASE Act even for copyrights that are not registered with the Copyright Office before the alleged infringement began. This means that copyright trolls will be able to threaten home Internet users with life-altering damages—and profit from those threats—based on works with no commercial or artistic value.

And that's not all:

Another troubling provision of the CASE Act would permit the Copyright Office to dispense with even the minimal procedural protections established in the bill for claims of $5,000 or less. These “smaller claims”—which are still at or above the largest allowed in small claims court in 21 states—could be decided by a single “Claims Officer” in a summary procedure on the slimmest of evidence, yet still produce judgments enforceable in federal court with no meaningful right of appeal.


[T]he federal courts are extremely cautious when granting default judgments, and regularly set them aside to avoid injustice to unsophisticated defendants. Nothing in the CASE Act requires the Copyright Office to show the same concern for the rights of defendants. At minimum, a requirement that small claims procedures cannot commence unless defendants affirmatively opt in to those procedures would give the Copyright Office an incentive to ensure that defendants’ procedural and substantive rights are upheld. A truly fair process will be attractive to both copyright holders and those accused of infringement.

The CASE Act appears to reflect an idealized view that the only people who sue other people for copyright infringement are those who have valid claims. But that is not the world we live in. Trolls abound, parasites eager to use the threat of litigation as a club to extract money from innocent victims. And the CASE Act, if passed, would give them a bigger weapon.

It also gives would-be censors additional tools to chill their critics through a new subpoena power administered through the Copyright Office, without sufficient due process built into the system to ensure that these subpoenas are not used as a means of unjustly stripping speakers of their right to anonymous speech.

The CASE Act also gives the Copyright Office the authority to issue subpoenas for information about Internet subscribers. The safeguards for Internet users’ privacy established in the federal courts will not apply. In fact, the bill doesn’t even require that a copyright holder state a plausible claim of copyright infringement before requesting a subpoena—a basic requirement in federal court.

EFF was joined on this letter by many other lawyers (including me) and experts who have worked to defend innocent people from unjust threats of litigation, in the hope that it can help pressure Congress not to give the green light to more of it.
