Cleantech has an enshittification problem

A firebombed cityscape under a smoky red sky. In the foreground is a gigantic brick, most of the length of a city block, with a set of solar panels atop it.  Image: 臺灣古寫真上色 (modified) https://commons.wikimedia.org/wiki/File:Raid_on_Kagi_City_1945.jpg  Grendelkhan (modified) https://commons.wikimedia.org/wiki/File:Ground_mounted_solar_panels.gk.jpg  CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en

On July 14, I’m giving the closing keynote for the fifteenth HACKERS ON PLANET EARTH, in QUEENS, NY. Happy Bastille Day! On July 20, I’m appearing in CHICAGO at Exile in Bookville.


EVs won’t save the planet. Ultimately, the material bill for billions of individual vehicles and the unavoidable geometry of more cars-more traffic-more roads-greater distances-more cars dictates that the future of our cities and planet requires public transit – lots of it.

But no matter how much public transit we install, there’s always going to be some personal vehicles on the road, and not just bikes, ebikes and scooters. Between deliveries, accessibility, and stubbornly low-density regions, there’s going to be a lot of cars, vans and trucks on the road for the foreseeable future, and these should be electric.

Beyond that irreducible minimum of personal vehicles, there’s the fact that individuals can’t install their own public transit system; in places that lack the political will or means to create working transit, EVs are a way for people to significantly reduce their personal emissions.

In policy circles, EV adoption is treated as a logistical and financial issue, so governments have focused on making EVs affordable and increasing the density of charging stations. As an EV owner, I can affirm that affordability and logistics were important concerns when we were shopping for a car.

But there’s a third EV problem that is almost entirely off policy radar: enshittification.

An EV is a rolling computer in a fancy case with a squishy person inside of it. While this can sound scary, it has lots of cool implications. For example, your EV could download your local power company’s tariff schedule and preferentially charge itself when rates are lowest; it could also coordinate with the utility to reduce charging when loads are peaking. You can start it with your phone. Your repair technician can run extensive remote diagnostics on it and help you solve many problems from the road. New features can be delivered over the air.
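
As a sketch of what tariff-aware charging could look like, here is a minimal, hypothetical example. The tariff numbers, the function name and the whole "API" are invented for illustration; a real implementation would pull live rates from the utility and talk to the car's battery-management system.

```python
# Hypothetical sketch of tariff-aware charging. All names and numbers here
# are made up; this is not any real utility's rate schedule or car API.

def cheapest_charge_hours(tariff, hours_needed):
    """Pick the cheapest hours of the day in which to draw power.

    tariff: dict mapping hour-of-day (0-23) to price per kWh.
    hours_needed: how many hours of charging the battery requires.
    Returns the chosen hours in chronological order.
    """
    # Sort hours by price, take the cheapest ones, then sort chronologically.
    by_price = sorted(tariff, key=tariff.get)
    return sorted(by_price[:hours_needed])

# A toy time-of-use schedule: cheap overnight, expensive in the evening peak.
tariff = {h: (0.12 if h < 6 else 0.45 if 17 <= h < 21 else 0.25)
          for h in range(24)}

print(cheapest_charge_hours(tariff, 4))  # -> [0, 1, 2, 3]
```

The same loop run in reverse (shedding load when the utility signals a peak) is the "coordinate with the utility" half of the idea.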

That’s just for starters, but there’s so much more in the future. After all, the signal virtue of a digital computer is its flexibility. The only computer we know how to make is the Turing complete, universal, Von Neumann machine, which can run every valid program. If a feature is computationally tractable – from automated parallel parking to advanced collision prevention – it can run on a car.

The problem is that this digital flexibility presents a moral hazard to EV manufacturers. EVs are designed to make any kind of unauthorized, owner-selected modification into an IP rights violation (“IP” in this case is “any law that lets me control the conduct of my customers or competitors”):

https://locusmag.com/2020/09/cory-doctorow-ip/

EVs are also designed so that the manufacturer can unilaterally exert control over them or alter their operation. EVs – even more than conventional vehicles – are designed to be remotely killswitched in order to help manufacturers and dealers pressure people into paying their car notes on time:

https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon

Manufacturers can reach into your car and change how much of your battery you can access:

https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world

They can lock your car and have it send its location to a repo man, then greet him by blinking its lights, honking its horn, and pulling out of its parking space:

https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/

And of course, they can detect when you’ve asked an independent mechanic to service your car and then punish you by degrading its functionality:

https://www.repairerdrivennews.com/2024/06/26/two-of-eight-claims-in-tesla-anti-trust-lawsuit-will-move-forward/

This is “twiddling” – unilaterally and irreversibly altering the functionality of a product or service, secure in the knowledge that IP law will prevent anyone from twiddling back by restoring the gadget to a preferred configuration:

https://pluralistic.net/2023/02/19/twiddler/

The thing is, for an EV, twiddling is the best case scenario. As bad as it is for the company that made your EV to change how it works whenever they feel like picking your pocket, that’s infinitely preferable to the manufacturer going bankrupt and bricking your car.

That’s what just happened to owners of Fisker EVs, cars that cost $40-70k. Cars are long-term purchases. An EV should last 12-20 years, or even longer if you pay to swap the battery pack. Fisker was founded in 2016 and shipped its first Ocean SUV in 2023. The company is now bankrupt:

https://insideevs.com/news/723669/fisker-inc-bankruptcy-chapter-11-official/

Fisker called its vehicles “software-based cars” and they weren’t kidding. Without continuous software updates and server access, those Fisker Ocean SUVs are turning into bricks. What’s more, the company designed the car from the ground up to make any kind of independent service and support into a felony, by wrapping the whole thing in overlapping layers of IP. That means that no one can step in with a module that jailbreaks the Fisker and drops in an alternative firmware that will keep the fleet rolling.

This is the third EV risk – not just finance, not just charger infrastructure, but the possibility that any whizzy, cool new EV company will go bust and brick your $70k cleantech investment, irreversibly transforming your car into 5,500 lb worth of e-waste.

This confers a huge advantage onto the big automakers like VW, Kia, Ford, etc. Tesla gets a pass, too, because it achieved critical mass before people started to wise up to the risk of twiddling and bricking. If you’re making a serious investment in a product you expect to use for 20 years, are you really gonna buy it from a two-year-old startup with six months’ capital in the bank?

The incumbency advantage here means that the big automakers won’t have any reason to sink a lot of money into R&D, because they won’t have to worry about hungry startups with cool new ideas eating their lunches. They can maintain the cozy cartel that has seen cars stagnate for decades, with the majority of “innovation” taking the form of shitty, extractive and ill-starred ideas like touchscreen controls and an accelerator pedal that you have to rent by the month:

https://www.theverge.com/2022/11/23/23474969/mercedes-car-subscription-faster-acceleration-feature-price


Microsoft pinky swears that THIS TIME they’ll make security a priority

A frame from a Peanuts animation, depicting Lucy yanking the football away from Charlie Brown, who is somersaulting through the sky. It has been altered. Lucy's head has been replaced with Microsoft's Clippy. Charlie Brown's head has been replaced with a 19th century caricature of a grinning Uncle Sam. The sky has been replaced with a 'code waterfall' effect as seen in the Wachowskis' 'Matrix' movies.

On June 20, I’m live onstage in LOS ANGELES for a recording of the GO FACT YOURSELF podcast. On June 21, I’m doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On June 22, I’ll be in OAKLAND, CA for a panel and a keynote at the LOCUS AWARDS.


As the old saying goes, “When someone tells you who they are and you get fooled again, shame on you.” That goes double for Microsoft, especially when it comes to security promises.

Microsoft is, was, always has been, and always will be a rotten company. At every turn, throughout their history, they have learned the wrong lessons, over and over again.

That starts from the very earliest days, when the company was still called “Micro-Soft.” Young Bill Gates was given a sweetheart deal to supply the operating system for IBM’s PC, thanks to his mother’s connection. The nepo-baby enlisted his pal, Paul Allen (whom he’d later rip off for billions) and together, they bought someone else’s OS (and took credit for creating it – AKA, the “Musk gambit”).

Microsoft then proceeded to make a fortune by monopolizing the OS market through illegal, collusive arrangements with the PC clone industry – an industry that only existed because they could source third-party PC ROMs from Phoenix:

https://www.eff.org/deeplinks/2019/08/ibm-pc-compatible-how-adversarial-interoperability-saved-pcs-monopolization

Bill Gates didn’t become one of the richest people on earth simply by emerging from a lucky orifice; he also owed his success to vigorous antitrust enforcement. The IBM PC was IBM’s first major initiative after the company was targeted by the DOJ for a 12-year antitrust enforcement action. IBM tapped its vast monopoly profits to fight the DOJ, spending more on outside counsel to fight the DOJ antitrust division than the DOJ spent on all its antitrust lawyers, every year, for 12 years.

IBM’s delaying tactic paid off. When Reagan took the White House, he let IBM off the hook. But the company was still seriously scarred by its ordeal, and when the PC project kicked off, the company kept the OS separate from the hardware (one of the DOJ’s major issues with IBM’s previous behavior was its vertical monopoly on hardware and software). IBM didn’t hire Gates and Allen to provide it with DOS because it was incapable of writing a PC operating system: they did it to keep the DOJ from kicking down their door again.

The post-antitrust, gunshy IBM kept delivering dividends for Microsoft. When IBM turned a blind eye to the cloned PC-ROM and allowed companies like Compaq, Dell and Gateway to compete directly with Big Blue, this produced a whole cohort of customers for Microsoft – customers Microsoft could play off on each other, ensuring that every PC sold generated income for Microsoft, creating a wide moat around the OS business that kept other OS vendors out of the market. Why invest in making an OS when every hardware company already had an exclusive arrangement with Microsoft?

The IBM PC story teaches us two things: stronger antitrust enforcement spurs innovation and opens markets for scrappy startups to grow to big, important firms; as do weaker IP protections.

Microsoft learned the opposite: monopolies are wildly profitable; expansive IP protects monopolies; you can violate antitrust laws so long as you have enough monopoly profits rolling in to outspend the government until a Republican bootlicker takes the White House (Microsoft’s antitrust ordeal ended after GW Bush stole the 2000 election and dropped the charges against them). Microsoft embodies the idea that you either die a rebel hero or live long enough to become the evil emperor you dethroned.

From the first, Microsoft has pursued three goals:

  1. Get too big to fail;
  2. Get too big to jail;
  3. Get too big to care.

It has succeeded on all three counts. Much of Microsoft’s enduring power comes from succeeding IBM as the company that mediocre IT managers can safely buy from without being blamed for the poor quality of Microsoft’s products: “Nobody ever got fired for buying Microsoft” is 2024’s answer to “Nobody ever got fired for buying IBM.”

Microsoft’s secret sauce is impunity. The PC companies that bundle Windows with their hardware are held blameless for the glaring defects in Windows. The IT managers who buy company-wide Windows licenses are likewise insulated from the rage of the workers who have to use Windows and other Microsoft products.

Microsoft doesn’t have to care if you hate it because, for the most part, it’s not selling to you. It’s selling to a few decision-makers who can be wined and dined and flattered. And since we all have to use its products, developers have to target its platform if they want to sell us their software.

This rarefied position has afforded Microsoft enormous freedom to roll out harebrained “features” that made things briefly attractive for some group of developers it was hoping to tempt into its sticky-trap. Remember when it put a Turing-complete scripting environment into Microsoft Office and unleashed a plague of macro viruses that wiped out years’ worth of work for entire businesses?

https://web.archive.org/web/20060325224147/http://www3.ca.com/securityadvisor/newsinfo/collateral.aspx?cid=33338

It wasn’t just Office; Microsoft’s operating systems have harbored festering swamps of godawful defects that were weaponized by trolls, script kiddies, and nation-states:

https://en.wikipedia.org/wiki/EternalBlue

Microsoft blamed everyone except themselves for these defects, claiming that their poor code quality was no worse than others, insisting that the bulging arsenal of Windows-specific malware was the result of being the juiciest target and thus the subject of the most malicious attention.


Palantir’s NHS-stealing Big Lie

A haunted, ruined hospital building. A sign hangs askew over the entrance with the NHS logo over the Palantir logo. Beneath it, a cutaway silhouette reveals a blood-spattered, scalpel-wielding surgeon with a Palantir logo over his breast, about to slice into a frightened patient with an NHS logo over his breast. Looming over the scene are the eyes of Peter Thiel, bloodshot and sinister.  Image: Gage Skidmore (modified) https://commons.m.wikimedia.org/wiki/File:Peter_Thiel_(51876933345).jpg  CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/deed.en

I’m on tour with my new, nationally bestselling novel The Bezzle! Catch me in TUCSON (Mar 9-10), then SAN FRANCISCO (Mar 13), Anaheim, and more!

A yellow rectangle. On the left, in blue, are the words 'Cory Doctorow.' On the right, in black, is 'The Bezzle.' Between them is the motif from the cover of *The Bezzle*: an escheresque impossible triangle. The center of the triangle is a barred, smaller triangle that imprisons a silhouetted male figure in a suit. Two other male silhouettes in suits run alongside the top edges of the triangle.

Capitalism’s Big Lie in four words: “There is no alternative.” Looters use this lie for cover, insisting that they’re hard-nosed grownups living in the reality of human nature, incentives, and facts (which don’t care about your feelings).

The point of “there is no alternative” is to extinguish the innovative imagination. “There is no alternative” is really “stop trying to think of alternatives, dammit.” But there are always alternatives, and the only reason to demand that they be excluded from consideration is that these alternatives are manifestly superior to the looter’s supposed inevitability.

Right now, there’s an attempt underway to loot the NHS, the UK’s single most beloved institution. The NHS has been under sustained assault for decades – budget cuts, overt and stealth privatisation, etc. But one of its crown jewels has been stubbornly resistant to being auctioned off: patient data. Not that HMG hasn’t repeatedly tried to flog patient data – it’s just that the public won’t stand for it:

https://www.theguardian.com/society/2023/nov/21/nhs-data-platform-may-be-undermined-by-lack-of-public-trust-warn-campaigners

Patients – quite reasonably – do not trust the private sector to handle their sensitive medical records.

Now, this presents a real conundrum, because NHS patient data, taken as a whole, holds untold medical insights. The UK is a large and diverse country and those records in aggregate can help researchers understand the efficacy of various medicines and other interventions. Leaving that data inert and unanalysed will cost lives: in the UK, and all over the world.

For years, the stock answer to “how do we do science on NHS records without violating patient privacy?” has been “just anonymise the data.” The claim is that if you replace patient names with random numbers, you can release the data to research partners without compromising patient privacy, because no one will be able to turn those numbers back into names.

It would be great if this were true, but it isn’t. In theory and in practice, it is surprisingly easy to “re-identify” individuals in anonymous data-sets. To take an obvious example: we know the two dates on which former PM Tony Blair received a specific treatment for a cardiac emergency, because this happened while he was in office. We also know Blair’s date of birth. Check any trove of NHS data that records a person who matches those three facts and you’ve found Tony Blair – and all the private data contained alongside those public facts is now in the public domain, forever.

Not everyone has Tony Blair’s reidentification hooks, but everyone has data in some kind of database, and those databases are continually being breached, leaked or intentionally released. A breach from a taxi service like Addison-Lee or Uber, or from Transport for London, will reveal the journeys that immediately preceded each prescription at each clinic or hospital in an “anonymous” NHS dataset, which can then be cross-referenced to databases of home addresses and workplaces. In an eyeblink, millions of Britons’ records of receiving treatment for STIs or cancer can be connected with named individuals – again, forever.

Re-identification attacks are now considered inevitable; security researchers have made a sport out of seeing how little additional information they need to re-identify individuals in anonymised data-sets. A surprising number of people in any large data-set can be re-identified based on a single characteristic in the data-set.
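
To make the mechanics concrete, here is a toy sketch of a re-identification "join" attack. Every record and field name here is invented; the point is only that a handful of quasi-identifiers (a birth date plus a couple of publicly known admission dates) is enough to single one record out of an "anonymous" set.

```python
# Toy illustration of re-identification. All data is invented.

anonymised_health = [
    {"patient_id": 7301, "dob": "1953-05-06",
     "admissions": {"2003-10-19", "2004-10-01"},
     "diagnosis": "cardiac arrhythmia"},
    {"patient_id": 4129, "dob": "1953-05-06",
     "admissions": {"2010-02-11"},
     "diagnosis": "fractured wrist"},
]

# Publicly known facts about the target: a birth date and two dates on
# which (say, from news reports) they were known to receive treatment.
known = {"dob": "1953-05-06", "admissions": {"2003-10-19", "2004-10-01"}}

def reidentify(records, known):
    # Keep only the records consistent with every public fact.
    return [r for r in records
            if r["dob"] == known["dob"]
            and known["admissions"] <= r["admissions"]]

matches = reidentify(anonymised_health, known)
print(len(matches))                  # -> 1: the "anonymous" number is now a name
print(matches[0]["diagnosis"])       # -> cardiac arrhythmia
```

Scale the auxiliary data up from two admission dates to a leaked taxi or transit history and the same one-line filter de-anonymises millions of records at once.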

Given all this, anonymous NHS data releases should have been ruled out years ago. Instead, NHS records are to be handed over to the US military surveillance company Palantir, a notorious human-rights abuser and supplier to the world’s most disgusting authoritarian regimes. Palantir – founded by the far-right Trump bagman Peter Thiel – takes its name from the evil wizard Sauron’s all-seeing orb in Lord of the Rings (“Sauron, are we the baddies?”):

https://pluralistic.net/2022/10/01/the-palantir-will-see-you-now/#public-private-partnership


“Open” “AI” isn’t

Tux the Penguin, posed on a Matrix credit-sequence 'code waterfall.' His eyes have been replaced with the menacing red eyes of HAL 9000 from Kubrick's '2001: A Space Odyssey.'   Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg  CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en

Tomorrow (19 Aug), I’m appearing at the San Diego Union-Tribune Festival of Books. I’m on a 2:30PM panel called “Return From Retirement,” followed by a signing:

https://www.sandiegouniontribune.com/festivalofbooks


The crybabies who freak out about The Communist Manifesto appearing on university curricula clearly never read it – chapter one is basically a long hymn to capitalism’s flexibility and inventiveness, its ability to change form and adapt itself to everything the world throws at it and come out on top:

https://www.marxists.org/archive/marx/works/1848/communist-manifesto/ch01.htm#007

Today, leftists signal this protean capacity of capital with the -washing suffix: greenwashing, genderwashing, queerwashing, wokewashing – all the ways capital cloaks itself in liberatory, progressive values, while still serving as a force for extraction, exploitation, and political corruption.

A smart capitalist is someone who, sensing the outrage at a world run by 150 old white guys in boardrooms, proposes replacing half of them with women, queers, and people of color. This is a superficial maneuver, sure, but it’s an incredibly effective one.

In “Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI,” a new working paper, Meredith Whittaker, David Gray Widder and Sarah Myers West document a new kind of -washing: openwashing:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807

Openwashing is the trick that large “AI” companies use to evade regulation and neutralize critics, by casting themselves as forces of ethical capitalism, committed to the virtue of openness. No one should be surprised to learn that the products of the “open” wing of an industry whose products are neither “artificial,” nor “intelligent,” are also not “open.” Every word AI hucksters say is a lie, including “and” and “the.”


Orphaned neurological implants


The startup world’s dirty not-so-secret is that most startups fail. Startups are risky ventures and their investors know it, so they cast a wide net, placing lots of bets on lots of startups and folding the ones that don’t show promise, which sucks for the company employees, but also for the users who depend on the company’s products.

You know what this is like: you sink a bunch of time into familiarizing yourself with a new product, you spend money on accessories for it, you lock your data into it, you integrate it into your life, and then, one morning — poof! All gone.

Now, there are ways that startups could mitigate this risk for their customers: they could publish their source code under a free/open license so that it could be maintained by third parties, they could refuse to patent their technology, or dedicate their patents to an open patent pool, etc.

All of this might tempt more people to try their product or service, because the customers for digital products are increasingly savvy, having learned hard lessons when the tools they previously depended on were orphaned by startups whose investors pulled the plug.

But very few startups do this, because their investors won’t let them. That brings me to the other dirty not-so-secret of the startup world: when a startup fails, investors try to make back some of their losses by selling the company’s assets to any buyer, no matter how sleazy.

A startup’s physical assets are typically minimal: used ergonomic chairs and laptops don’t exactly hold their value, and there’s not much of a market for t-shirts and stickers advertising dead businesses.

Wily investors are more interested in intangible assets: user data and patents, which are sold off to the highest bidder. That bidder is almost certainly a bottom-feeding scumbag, because the best way to maximize the value of user data is to abuse it, and the best way to maximize the value of a failed business’s patents is to use them for patent trolling.

If you let your investors talk you into patenting your cool idea, there’s a minuscule chance that the patent will be the core of a profitable business — and a much larger chance that it will end up in a troll’s portfolio. Real businesses make things that people want. Patent trolls are parasites, “businesses” whose only products are legal threats and lawsuits, which they use to bleed out real businesses.

The looming threat of dissolution gives rise to a third startup dirty secret: faced with a choice of growth or sustainability, companies choose growth. There’s no point in investing in sustainability — good information security, robust systems, good HR — if it costs you the runway you need to achieve liftoff.

Your excellent processes won’t help you when your investors shut you down, so a “lean” startup has only the minimum viable resiliency and robustness. If you do manage to attain liftoff — or get sold to a Big Tech firm — then you can fix all that stuff.

And if the far more likely outcome — failure — comes to pass, then all the liabilities you’ve created with your indifferent security and resiliency will be someone else’s problem. Limited liability, baby!

Combine these three dirty secrets and it’s hard to understand why anyone would use a startup’s product, knowing that it will collect as much data as it can, secure it only indifferently, and sell that data on to sleazy data-brokers. Meanwhile, the product you buy and rely upon will probably become a radioactive wasteland of closed source and patent trolling, with so much technology and policy debt that no one can afford to take responsibility for it.

Think of Cloudpets, a viral toy sensation whose manufacturer, Spiral Toys, had a successful IPO — and then immediately started hemorrhaging money and shedding employees. Cloudpets were plush toys that you connected to your home wifi; they had built-in mics that kids could activate to record a voice-memo, which was transmitted to their parents’ phones by means of an app, and parents could send messages back via the toys’ speakers.

But Spiral Toys never bothered to secure those voice memos or the system for making new ones. The entire database of all recordings by kids and parents sat on an unencrypted, publicly accessible server for years. It was so indifferently monitored that no one noticed that hackers had downloaded the database multiple times, leaving behind threats to dump it unless they were paid ransoms.

By the time this came to light, Spiral Toys’ share price was down more than 99% and no one was answering any of its email addresses or phones. The data — 2.2 million intimate, personal communications between small children and their parents — just hung out there, free for the taking:

https://www.troyhunt.com/data-from-connected-cloudpets-teddy-bears-leaked-and-ransomed-exposing-kids-voice-messages/

Data leakage is irreversible. Those 2,200,000 voice memos are now immortal, child-ghosts that will haunt the internet forever — after the parents are dead, after the kids are dead.

Data breaches are permanent. Filling a startup’s sandcastle with your important data is a high-risk bet that the company will attain liftoff before it breaches.

It’s not just your data that goes away when a startup folds — it’s also the money you invest in its hardware and systems, as well as the cost of replacing devices that get bricked when a company goes bust. That’s bad enough when it’s a home security device:

https://gizmodo.com/spectrum-kills-home-security-business-refuses-refunds-1840931761

But what about when the device is inside your body?

Earlier this year, many people with Argus optical implants — which allow blind people to see — lost their vision when the manufacturer, Second Sight, went bust:

https://spectrum.ieee.org/bionic-eye-obsolete

Nano Precision Medical, the company’s new owners, aren’t interested in maintaining the implants, so that’s the end of the road for everyone with one of Argus’s “bionic” eyes. The $150,000 per eye that those people paid is gone, and they have failing hardware permanently wired into their nervous systems.

Having a bricked eye implant doesn’t just rob you of your sight — many Argus users experience crippling vertigo and other side effects of nonfunctional implants. The company has promised to “do our best to provide virtual support” to people whose Argus implants fail — but no more parts and no more patches.

Second Sight wasn’t the first neural implant vendor to abandon its customers, nor was it the last. Last week, Liam Drew told the stories of other neural abandonware in “Abandoned: the human cost of neurotechnology failure” in Nature:

https://www.nature.com/immersive/d41586-022-03810-5/index.html

Among that abandonware: ATI’s neural implant for reducing cluster headaches, Nuvectra’s spinal-cord stimulator for chronic pain, Freehand’s paralysis bypass for hands and arms, and others. People with these implants are left in a precarious limbo, reliant on reverse-engineering and a dwindling supply of parts for maintenance.

Drew asked his expert subjects what is to be done about this. The least plausible answer is to let the market work its magic: “long-term support on the commercial side would be a competitive advantage.” In other words, wait for companies to realize that promising a durable product will attract customers, and for their less-durable competitors to go out of business.

A better answer: standardization. “If components were common across devices, one manufacturer might be able to step in and offer spares when another goes under.” 86% of surgeons who implant neurostimulators back this approach.

But the best answer comes from Hunter Peckham, co-developer of Freehand and a Case Western biomedical engineer: open hardware. “Peckham plans to make the design specifications and supporting documentation of new implantable technologies developed by his team freely available. ‘Then people can just cut and paste.’”

This isn’t just the best answer, it’s the only one. There’s no ethical case for permanently attaching computers to people’s nervous systems without giving them the absolute, irrevocable right to nominate who maintains those computers and how.

This is the case that Christian Dameff, Jeff Tully and I made at our Defcon panel this year: “Why Patients Should Hack Medtech.” Patients know things about their care and their needs that no one else can ever fully appreciate; they are the best people to have the final say over med-tech decisions:

https://www.youtube.com/watch?v=_i1BF5YGS0w

This is the principle that animates Colorado’s HB22–1031, the “Consumer Right To Repair Powered Wheelchairs Act,” landmark Right to Repair legislation that was signed into law last year:

https://www.eff.org/deeplinks/2022/06/when-drm-comes-your-wheelchair

Opponents of this proposal will say that it will discourage investment in “innovation” in neurological implants. They may well be right: the kinds of private investors who hedge their bets on high-risk ventures by minimizing security and resilience and exploiting patents and user-data might well be scared off of investment by a requirement to make the technology open.

It may be that showboating billionaire dilettantes will be unwilling to continue to pour money into neural implant companies if they are required to put the lives of the people who use their products ahead of their own profits.

It may be that the only humane, sustainable way to develop neural implants is to publicly fund that research and development, with the condition that the work products be standard, open, and replicable.

Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg

CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en


[Image ID: The staring eye of HAL9000 from 2001: A Space Odyssey. Centered in it is a medieval anatomical engraving of the human nervous system, limned in a blue halo.]

Here’s 0ctane and his sub-$200 #OSHW, blob-free, trustworthy computer #Defcon25 -  Grats on an excellent Defcon debut!


Modern computing platforms offer more freedom than ever before. The rise of Free and Open Source Software has led to more secure and heavily scrutinized cryptographic solutions. However, below the surface of open source operating systems, strictly closed source firmware along with device driver blobs and closed system architecture prevent users from examining, understanding, and trusting the systems where they run their private computations. Embedded technologies like Intel Management Engine pose significant threats when, not if, they get exploited. Advanced attackers in possession of firmware signing keys, and even potential access to chip fabrication, could wreak untold havoc on cryptographic devices we rely on.


After surveying all-too-possible low level attacks on critical systems, we will introduce an alternative open source solution to peace-of-mind cryptography and private computing. By using programmable logic chips, called Field Programmable Gate Arrays, this device is more open source than any common personal computing system to date. No blobs, no hidden firmware features, and no secret closed source processors. This concept isn't "unhackable"; rather, we believe it to be the most fixable, and this is what users and hackers should ultimately be fighting for.

0ctane
0ctane is a longtime hobbyist hacker, with experience primarily in UNIX systems and hardware. Holding no official training or technical employment, 0ctane spends most of their free time building and restoring older computer systems, hanging out at surplus stores and tracking down X86 alternatives with an occasional dabbling in OSX and 802.11 exploitation. Other interests include SDR and RF exploration, networking, cryptography, computer history, distributed computing…really anything that sounds cool that I happen to stumble on at 3am.

https://www.defcon.org/html/defcon-25/dc-25-speakers.html

Tangled wires, Freegeek, Portland, Oregon, USA on Flickr.
Thursday, January 15, 2015 - 5:30pm to 7:00pm

You are increasingly made of computers (pacemakers, hearing aids, prostheses), you are increasingly inhabiting an internet of things (cars, planes, buildings), and that’s potentially pretty bad news. The model for regulating computers is to insist that they be somehow constrained so that they can’t do undesirable things, but we don’t actually know how to do this - the closest we come is making computers that have supervisory processes that spy on you, that you can’t shut down, and that every conceivable kind of bad guy could use to come after you.

Type:  Lecture
Audience: Open to the Public
Building: Nador u. 9, Faculty Tower
Room: Auditorium
Sensitive Data sign, Freegeek, Portland, Oregon, USA on Flickr.