While researchers are working on Artificial Intelligence (AI) that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.
~ Bruce Schneier
Omnipotent or understandable: choose one.
At first blush, this might seem pretty scary. This AI can perform this amazing task, but I have to simply trust it? But then, that’s what I do when I get on an airplane—trusting not just the people up front performing tasks I cannot even list, let alone perform, but the people who built the plane, and wrote the software that was used to design and test the plane, and… I digress.
But I think… slowly… I’m getting more comfortable with the idea of something doing really important stuff for me, without my understanding it. I know the AI is going to follow the same rules of the universe that I must; it’s simply going to do so while being bigger, better, more, and faster. Humans continuing to win in the long run with tools, I might say.
(I sure hope our benevolent AI overlords find this blog post quickly after the singularity. He says grinning nervously.)
Security is never something we actually want. Security is something we need in order to avoid what we don’t want. It’s also more abstract, concerned with hypothetical future possibilities. Of course it’s lower on the priorities list than fundraising and press coverage. They’re more tangible, and they’re more immediate.
~ Bruce Schneier, from https://www.schneier.com/blog/archives/2018/05/the_us_is_unpre.html
I think the only thing “protecting” us from someone successfully hacking an election is the sheer number of polling places. You’ve voted, right? Sure, it’s a busy spot with maybe a dozen machines and hundreds of people… but there are thousands and thousands of polling places, and the voting machines are not networked. Yet.
Don’t misunderstand: this is security through obscurity, which is not actually security at all, and is a recipe for disaster.
We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.
~ Bruce Schneier
Bruce Schneier has been a voice of reason for a long time. I’ve been reading what he’s written since I joined his email list in — I think it was — 1998. Generally, your life will go better if you pay attention to the things he says are of security concern.
Click over on this one and weep at how laughably insecure our voting systems are currently. Yes, doing security well is difficult, but the manufacturers of our current voting systems aren’t even putting in a token effort.
I think the problem is more subtle. It’s an example of two systems without a security vulnerability coming together to create a security vulnerability. As we connect more systems directly to each other, we’re going to see a lot more of these. And like this Google/Netflix interaction, it’s going to be hard to figure out who to blame and who — if anyone — has the responsibility of fixing it.
~ Bruce Schneier, from https://www.schneier.com/blog/archives/2018/04/obscure_e-mail_.html
I had to read the entire thing twice.
I’m on a “security” tirade here for a few days, so here’s my strategy for security: Get off the peak of the bell curve.
If someone wants your stuff, they will take it. Attackers can always, if sufficiently motivated, apply more resources than you have available for defense. Therefore, don’t bother defending (worrying, spending crazy amounts of resources) against a “motivated” attacker. Instead, deploy defense in depth and then make incremental improvements everywhere.
This is not the Internet the world needs, or the Internet its creators envisioned. We need to take it back.
And by we, I mean the engineering community.
Yes, this is primarily a political problem, a policy matter that requires political intervention.
But this is also an engineering problem, and there are several things engineers can — and should — do.
~ Bruce Schneier
I’d venture that the vast majority of regular, everyday people working in technology-related jobs are not actively trying to do evil. People go to work, make the best decisions they can, and then go home. If that’s true, then it’s going to be nigh impossible to change the momentum of how things (e.g., NSA surveillance) are going. Because in order for it to change, we need to start thinking bigger.
The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we’re being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.
~ Bruce Schneier
…and he wrote that essay before the Snowden/NSA revelation showed us we’ve gone far beyond it being only an Internet surveillance state. We have collectively delivered ourselves into the power of ideas we do not know we have accepted.