How Women in AI are Changing the Face of Tech and Campaigns

“Life doesn’t always give us what we deserve, but rather, what we demand. And so you must continue to push harder than any other person in the room.” Those words from Wadi Ben-Hirki, a young feminist activist from Nigeria, are a good reminder that gender equity is still a problem in nearly all fields. The tech field is no exception, and this has been obvious (and repeatedly stated) for decades. But the emergence of AI as a special and distinct field gives us the opportunity to highlight what women in technology have brought to the table.

Gender inclusivity and women’s empowerment are not just advertising tags or feel-good slogans. Call it essentialism or just empirics, but some notable women in AI are asking foundational questions and offering binary-splitting solutions. When the nonprofit consortium “Women in AI” recently met for a conference at Trinity College in Dublin, Ireland, the purpose of the conference wasn’t some kind of inward identity gazing, but instead an outward, socially relevant gaze on “ethically-driven AI design.” One of the organizers was Alessandra Sala, whose “research at Nokia Bell Labs focuses on distributed algorithms and complexity analysis with an emphasis on graph algorithms and privacy issues in large-scale networks.”

An ongoing conversation about gender in technology, and AI in particular, is also critical for growth and self-reflection in the industry. Consider Jane Crofts, founder of Data to the People, a global data literacy nonprofit. One of her curiosity-spawned projects is Databilities, “an evidence-based data literacy competency framework that was launched at the 2018 United Nations World Data Forum.”

And consider also Abeba Birhane, a graduate student in cognitive science at University College Dublin, in Ireland. Birhane also works for the Irish software research center Lero. She specializes in complexity science and the philosophy of technology. Birhane has emerged as a powerful voice of anti-Cartesianism. What does that mean? René Descartes (hence “Cartesianism”) is the French philosopher responsible for constructing a “subject-object” dichotomy that is at the root of many of our assumptions about humans (the subject) being able to control and predict the world (the object).

Birhane provocatively names the audacious hypotheses that emerge from subject-object thinking: our assumption that we can “predict people’s behaviour with precision . . . tell whether someone is going to commit a crime before they do . . . The quest for absolute certainty has been at the top of Western science’s agenda,” she writes, and AI research similarly strives “for generalizability and predictability.” But reality is never predictable. It’s infinitely complex.

This is a shattering of the dominant AI narrative. It opens the door for a more nuanced and patient approach to reaching people—perhaps analogous to “deep canvassing,” the new (and technology and ethics-driven) campaign paradigm that is made up of ongoing conversations between canvassers and potential voters. It’s a process that allows complexity and human infinitude to drive the conversation—people change their minds, or solidify their positions for their reasons in dialogue with others. This endeavor is aided by the tech developed by Open Field, a project of two more notable tech women, Emily Del Beccaro and Ari Trujillo Wesler. Because more personalized and spontaneous campaigning requires the ability to fill in information while out canvassing rather than collecting and integrating it later, Open Field offers services like real time analytics dashboards and the ability to quickly customize scripts, among other things.

Big Data Can Track Students. Can It Improve Education?

From the EdSurge news page, we learn that colleges and universities are discovering the benefits of big data. It’s no secret that colleges and universities have to do a lot more with less these days, or face closure. Between high administrative costs, a declining number of applicants (the result of widespread economic insecurity that seems immune to the healing effects of shareholder economic growth), and a general reluctance to take on student debt, higher educational institutions need cost-efficiency and good student experiences if they are to survive in great enough numbers to actually offer students meaningful post-secondary choices.

So the big data companies were apparently out in force at a recent technology-in-higher-education trade show. There were a startling 275+ companies there, promoting all kinds of data-driven solutions to logistical, resource, recruiting, and management questions.

And it’s all about tracking the student, the EdSurge article says: “If colleges actually bought all the tools sold here, just about every move made by students and professors in physical and virtual campuses would be tracked and analyzed in the name of efficiency.” But it doesn’t stop there. The ultimate vision, which should surprise no one who is familiar with data entrepreneurialism, is to create and fill those student profiles before the students actually move to the college, as well as continue the tracking after they graduate.

The pitch is that the tracking will help improve retention, student experience, and graduation rates. Data collection can track student progress in class and on major projects. Data collection can track student use of buildings to maximize building hours, and even spot students who need financial or educational help. Colleges can also use data to determine the best performing professors.

All of this happens at relatively low cost compared to what higher education institutions typically spend, for the output they get, when they hire consultants who don’t use such data.

For those concerned about privacy, the companies typically argue that students have the ability to opt out, so there’s no invasion of privacy. But readers may also be sympathetic to a Wired piece by Brian Barrett written last August, in the context of Apple’s practice of sharing voice assistant recordings with contractors, and the “opt-out” offered by companies flagged for getting too personal or chatty with people’s data. Barrett points out that under an opt-out model, data collection is the default; it happens automatically unless you proactively stop it, whereas an opt-in puts the consumer “in control from the start.”

Nevertheless, as Nicole Gorman reports in Education World, CEOs of data companies see the collection of that student data as a key to increasing student achievement “despite frequent controversy over privacy and security concerns.” And that’s fair enough. In particular, tracking data around student services, student buildings, hours in the library or union, cost of things like books and supplies, all these things above and beyond actual academic performance, could certainly be used to increase the overall serviceability of campuses to students—and think especially of special-needs students, first-generation college attendees, and the like.

High-Tech Treehouse Roundup

Even for the most austere person who doesn’t need lavish, exotic, or elaborate surroundings, there is something fascinating about treehouses. Not the kind that are every kid’s dream and every parent’s bad trip. If you were a kid who grew up in neighborhoods with lots of yard flora, including trees, you may have dreamed of treehouses with television (and later internet), lavish surroundings, and maybe even full automation. What you usually ended up installing in your trees was a glorified wooden box that didn’t offer as much to do, or as much protection from the elements, as you idealized.

But Google “high-tech treehouses” and you’ll go down a proverbial rabbit hole: a rabbit hole of trees. Far from being exclusively high-tech, these are simultaneously high- and low-tech beauties, utilizing a little bit of permaculture and a lot of technology. They’re also a cousin of the well-designed tiny house. In this instance, the tree structures reflect the tiny house ethos, the idea that you can pack a lot of beauty into a simple and small design—plus you can do it in a tree.

In a way, it’s a return to our roots (pardon the pun). Humans may have lived in trees until about 40,000 years ago. There’s a theory that these modified “nests” were inherited by humans from prehistoric great apes. Now, they are a Shangri-La for the creative set, often brilliant combinations of modest and enriched, with some of the most creative architecture we’ve ever seen. And these aren’t for kids—or at least not just for kids (some of them have great kid-spaces). Here are some notable places to research the potential of tree platform structures:

A 3D-Scanned Japanese Tree-Mansion

Learn more about the Kusukusu treehouse

Kusukusu was built by Japanese professional treehouse creator Takashi Kobayashi, who has built over 120 treehouses over the past two decades. Kobayashi’s team first 3D-scanned hundreds of points on the trees in order to create a steel trellis meant to “thread” through the entire wide lot of them. Once a skeleton was built, the team used steel and wood, filled in spaces with glass, added elaborate stairways to connect different levels and paths of the tree, then garnished the whole thing with beautiful wooden decks, complete with tables and chairs. The finished product is a multi-tree structure that reminds us more of a cruise ship than a treehouse.

A Retro-Futuristic Treetop Escape

Portola Valley in California is home to this structure, and there’s a great podcast episode about it here. Described as a “kid-friendly hi tech treehouse” with a “midcentury modern aesthetic,” it also appears in a shorter video on the same page. Naturally formed posts supplement the otherwise minimal support of the tree. There’s a large bi-level living area inside. It’s a family house for sure. Metal-roofed, with good polished wood and paneling, a covered patio/balcony, all extremely well-lit and surrounded by taller trees, this is the stuff that tree-dreams are made of.

Volcano Cone Houses and Endless Forms and Ideas

Insider.com has a list of 35 “drop-dead gorgeous” tree structures. One, Bisate Lodge in Rwanda, was built in the eroded cone of a long-inactive volcano. Its structure is snake-like, connecting six independent treehouses that look kind of like nests. Each has its own fireplace and suite. In France, there’s a tree castle with a jacuzzi. In Atlanta, an urban garden Airbnb treehouse.

If nothing else, these beauties demonstrate that, in the event of mass flooding or other apocalyptic scenarios that preclude ground living, there will be some places we can go, provided we don’t also run out of trees. 

Smithsonian Opens the Digital Doors

When we’re working in digital, one of the hairiest issues can be finding appropriate imagery and other resources to build on without stepping on a creator’s rights or paying through the nose. That’s why you see so many blogs and social media posts with boring stock imagery!

There is no excuse now!

Today, the Smithsonian announced a massive new Open Access initiative years in the making. Through the new portal si.edu/OpenAccess, the museum has released 2.8 million images from its collection for anyone to use, modify, and share. This massive digital access project includes photos of artwork as well as 3D imagery and nearly 200 years of data archives from all 19 Smithsonian museums, nine research centers, libraries, archives, and the National Zoo.

From the Smithsonian blog:

Nearly 200 other institutions worldwide—including Amsterdam’s Rijksmuseum, New York’s Metropolitan Museum of Art and the Art Institute of Chicago—have made similar moves to digitize and liberate their masterworks in recent years. But the scale of the Smithsonian’s release is “unprecedented” in both depth and breadth, says Simon Tanner, an expert in digital cultural heritage at King’s College London.

Researchers and academics praised the news. “I don’t think American citizens fully comprehend what’s happening here,” wrote Clare Fieseler of Georgetown University on Twitter. “Today the Smithsonian begins the journey of putting the 4 billion objects we collectively own in our national ‘basement’ on the web. FOR EVERYONE.”

“Big news!” wrote Andrew Lih, Met Museum Wikimedia strategist. “Smithsonian adopts a new Open Access policy, allowing free reuse of their content. The Wikipedia community and Wikimedia DC have worked for years to help this move forward.”

From the Smithsonian:

Listed under a Creative Commons Zero (CC0) license, the 2.8 million images in the new database are now liberated from all restrictions, copyright or otherwise, enabling anyone with a decent Internet connection to build on them as raw materials—and ultimately participate in their evolution.

We hope you’re as excited as we are!

Space Debris and Cooperation

Space debris is a big problem. There’s a whole lot of it orbiting our planet: fragments, spent rocket stages, and defunct satellites crashing into one another to produce even more, from dangerous large bits to even more dangerous small bits that are harder to detect. It complicates spaceflight and space maintenance work. Significantly, it presents a kind of “tragedy of the commons” scenario because of the difficulty of making international cooperation actually work.

The U.S. is taking it seriously and will do more in 2020. Politico reports: “Agencies will begin rewriting regulations early next year to match the updated government guidelines released last month that limit the creation of garbage in orbit . . .” But experts say this has to include an effort to spread best practices globally, since other privileged nations are going into space and since the private sector will probably generate the predominant space actors within a few decades. 

Not everyone agrees with this. Writing in November, Adam Routh took a somewhat cynical route to space debris governance, pointing out that “[d]ecades of stalled efforts have proven multilateral space agreements are simply too difficult to develop” and suggesting that “very limited” agreements, and more bilateral agreements, are better than trying to make a larger structural regime work. He’s right that there have been a lot of stumbles, but it would be a hasty generalization to say that means we shouldn’t keep trying. Bilateral and multilateral approaches aren’t mutually exclusive. He’s also right that “economic interests can be persuasive in international law development,” and so we have to figure out how to incentivize that cooperation.

Stronger international norms are also necessary to deal with actors who are willing to externalize the costs of their military and commercial objectives. Consider India’s anti-satellite technology, which deploys kinetic force to destroy enemy satellites (while India assures the world it has no such enemies presently). “Mission Shakti,” as the technology is called, has “generated hundreds of pieces of debris . . . approximately 50 fragments still remain in orbit. Every one of these fragments constitutes an individual space object over which India retains exclusive jurisdiction and control. These fragments are at great risk of colliding with each other, and possibly other satellites, which would result in the generation of even more debris.” That seems a very high price for the world to pay for a weapon with no enemies to use it against. The technology seems to violate at least two current treaties, and the risk of future damage is high.

Condemnations of irresponsibility may feel good, but the presence of actual global working groups, and practical agreements, would go a long way towards convincing people—the public, policymakers—that cooperation works. 

It’s no surprise that voices from the military (a bottom-line, efficacy-oriented culture) support large-scale international cooperation on debris: Recently, Lt. Gen. Susan Helms, commander of the U.S. Strategic Command’s Joint Functional Component Command for Space, told the media that the United States must work “with other nations and the private sector” to track debris. Currently, about 22,000 pieces are tracked, and that’s clearly not enough, so Helms says “We must partner with other nations and enterprises to achieve mutually beneficial goals.”

Space debris is an extreme negative externality, one that should have been contemplated but was not. Such problems nearly always require cooperation to solve: a willingness to forsake short-term advantage-seeking in return for long-term security.

Cryptocurrency Beyond Good and Evil

In Beyond Good and Evil, Friedrich Nietzsche lists “objection, evasion [and] joyous distrust” as signs of health. By that standard, and perhaps that standard only, cryptocurrency’s public image is healthy. I mean, if you do a search for crypto stories over the past month or two, a third of what pops up are arrests for fraud and revelations of scams, another third are techno-utopian treatises lauding this miracle crypto-elixir, and the final third are ads to buy Bitcoin.

But isn’t this how it’s been from the beginning with digital currencies? There’s such a rush of stories, such overproduction of scenario-building, that one feels as if one is walking through a busy mall with pitches erupting from both sides. 

Cryptocurrencies are wonderful! They can help lift the unbanked out of financial marginalization, mitigating poverty for tens of millions of people. They can make remittances easier, so immigrant workers, refugees, and international gig economy workers can send money to their families and communities. They can facilitate investment in small businesses, green energy, and other values-based investments. An electronic blockchain ledger can (if opened) expose fraud, trafficking, and other illegal dealings. At the same time, their secure encryption guarantees privacy and independence. They can provide local communities, independence movements and grassroots organizers with financial access tied to neither central banks nor governments. They promise to de-center the dollar as the chief international currency, ending American imperialism!

AND NOT ONLY THAT, BUT 

Cryptocurrencies are terrible! They can fund (and have funded) terrorists. They can fund (and have funded) organized crime. They can fund (and have funded) child sex trafficking. They can fund (and have funded) nationalist and anti-democratic movements. Their displacement of the dollar may have far-reaching economic disadvantages and turn the U.S. into a belligerent warmonger. They allow totalitarian regimes like North Korea the financial space to evade sanctions and fund, say, a nuclear program.

I realize some of these are contradictory positions. It’s like a hypothesis testing festival, a giant cryptocurrency debate tournament. And all of this might be moot, because cryptocurrency’s biggest challenge is convincing people to use it (probably the most difficult test for any alternative currency, and also the only necessary one, because once enough people recognize the legitimacy of a medium of exchange, it’s functionally legitimate—usually). 

Perhaps Paul Krugman is right: “To be successful,” he recently wrote, “money must be both a medium of exchange and a reasonably stable store of value. And it remains completely unclear why Bitcoin should be a stable store of value.” 

I’m not as smart as Krugman, but I suspect stability won’t always be a problem for cryptocurrency. I think the real problem might be that crypto’s constant manipulation by bad faith actors demonstrates that, while central banks and governments are terrible at controlling money, the curators and couriers of cryptocurrency may not be any better. Maybe everybody tends to hide their personal drive for power behind veneers of “freedom” or “responsibility.” When cryptocurrency is “good,” it’s because humans are good. When it’s “evil,” it’s because we’re evil. Is cryptocurrency just a mirror?

Time Travel Roundup

Writing for Live Science, Adam Mann suggests that the concept of time travel might be hardwired into our brains. Our tendency to conflate time and space in our linguistic structures is possible evidence of this “structural” tendency to believe that time is elastic or relative to space. Adam gives several examples using the work of Israeli linguist Guy Deutscher: the notion of “moving through time the way we move through three-dimensional space” and being “essentially incapable of talking about temporal matters without referencing spatial ones” and how the “around” in “I’ll meet you around lunchtime” are evidence that we think we can move through time as we move through space. Adam also notes that all cultures have “time slip” stories where people fall asleep, lose consciousness, sing a chant, or do other things that result in voluntary or involuntary time travel.

Conventional wisdom on time travel these days is that we can “travel forward through time” but not backward, because backward time travel results in paradoxes (more about that later). This “traveling forward through time” isn’t just a sarcastic joke meaning that we are always traveling forward through time. It seems pretty clear that we can also hack forward time travel by “jumping” forward, even in the simplest example of the “time slows down at the speed of light” narrative (which results in my travel “into the future” if I return to Earth after a few years and find that several decades have passed).

There are other ways to accelerate/decelerate time, such as designing a controlled (but somehow supermassive) black hole in which time moves more slowly than in the space outside it. Stephen Hawking discussed this in 2010. Travelers near a black hole could travel at half the speed of time, as it were—”Round and round they’d go,” Hawking said (in the voice of Benedict Cumberbatch), “experiencing just half the time of everyone far away from the black hole.” 
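Hawking’s “half the time” figure can be sanity-checked against the Schwarzschild formula for gravitational time dilation. This is a simplified sketch for a static (hovering) observer; a real orbit would add velocity-based dilation on top, so treat the numbers as illustrative:

```python
import math

def dilation_factor(r_over_rs: float) -> float:
    """Schwarzschild time dilation for a static observer.

    r_over_rs is the observer's distance from the black hole expressed
    as a multiple of the Schwarzschild radius r_s. The returned value is
    the ratio of time experienced locally to time far from the hole.
    """
    return math.sqrt(1.0 - 1.0 / r_over_rs)

# Hovering at r = (4/3) * r_s, the formula gives exactly 0.5:
# one year passes locally for every two years far away.
print(dilation_factor(4.0 / 3.0))
```

So “experiencing just half the time of everyone far away” corresponds, in this simplified static picture, to sitting at one and a third Schwarzschild radii from the hole.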

But forward time travel seems rather uninspiring. If there’s no way to get back, and no way to get to the past to begin with, time travel is of limited utility. From a utilitarian standpoint, as a society or as individuals, we’d want to travel through time to fix things we are otherwise unable to correct or to learn things about the future, knowledge that is useless if there’s no way to return from the future to the present. Of course, all these things are paradoxical, and that’s exactly why they present the greatest utilitarian cases for time travel—because they overcome the “scarcity of the possible.” 

Quantum theorists see the possibility of non-paradoxical (or transparadoxical) time travel. That they see such a possibility isn’t surprising. The recent development and publicity of quantum computers’ ability to do calculations in minutes that might otherwise take thousands or tens of thousands of years understandably suggest optimism about overcoming temporal limits. 

One quantum-level approach to the paradoxes of traveling into the past is the (Igor) Novikov self-consistency principle, which “asserts that if an event exists that would give rise to a paradox, or to any ‘change’ to the past whatsoever, then the probability of that event is zero.” Put another way, contradictory causal loops cannot form, but consistent ones can. Another possible interpretation of the dynamics of such causal loops is that they create parallel universes like bubbles, competing against each other in a Darwin-esque fashion, until the most optimal (non-paradoxical) outcome wins. It would be the ultimate do-over.

Does Quantum Computing Mean the End of Cryptocurrency?

Traditional computing models are actually “pre-modern” in that the model of physics they take as their starting point is “classical.” They rely on formulae more analogous to Newton’s laws of motion than the quantization paradigm. But quantum computers are the order of the day, and they are about to take over the world. Recently, Google used its state-of-the-art quantum computer to complete a complex computational problem in 200 seconds. That’s over three minutes, so before you act unimpressed, it was a problem that would have taken 10,000 years for any non-quantum supercomputer to finish. So okay then.
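The arithmetic behind that comparison is worth a quick check. Converting 10,000 years into seconds and dividing by 200 gives the implied speedup:

```python
# Back-of-the-envelope check on the claimed quantum advantage:
# 200 seconds versus an estimated 10,000 years on a classical supercomputer.
SECONDS_PER_YEAR = 365.25 * 24 * 3600            # ~3.156e7 seconds
classical_seconds = 10_000 * SECONDS_PER_YEAR    # ~3.156e11 seconds
quantum_seconds = 200

speedup = classical_seconds / quantum_seconds
print(f"Implied speedup: {speedup:.2e}x")
```

The result lands on the order of 1.6 billion, which is why billion-fold (not trillion-fold) is the right order of magnitude for this particular claim.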

The prospect of a computer operating roughly 1.5 billion times faster than its classical predecessors raises a number of questions, one of which is the effect such a quantum leap will have on issues surrounding cryptocurrency. As they are currently manifesting, cryptocurrencies are too unstable to be economically advantageous. They constantly fluctuate in price. Imagine having a hundred dollars in your pocket, but not knowing whether the lunch you buy tomorrow will cost $15 or $75. You would quickly opt out of that monetary system if you could.

Well, add to those troubles a new one brought to you by quantum computing: the ability to break open blockchain. Blockchain refers to the electronic “ledger” of transactions for cryptocurrencies like Bitcoin. That ledger is encrypted and thus the privacy of transactions is preserved, fulfilling cryptocurrency’s original promise to operate independently of governments and central banks. 
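The “ledger” idea can be pictured with a toy hash chain. This is a deliberately minimal sketch (real blockchains add proof-of-work, timestamps, Merkle trees, and digital signatures), but the core tamper-evidence mechanism, each block committing to the hash of its predecessor, looks like this:

```python
import hashlib
import json

def block_hash(contents: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Build a block that commits to its transactions and predecessor."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash({"transactions": transactions,
                                "prev_hash": prev_hash})
    return block

def chain_is_valid(chain: list) -> bool:
    """Every block must hash correctly and point at its predecessor."""
    for i, block in enumerate(chain):
        expected = block_hash({"transactions": block["transactions"],
                               "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
chain = [genesis, make_block(["bob pays carol 2"], genesis["hash"])]
print(chain_is_valid(chain))                        # the chain checks out
chain[0]["transactions"] = ["alice pays bob 500"]   # tamper with history
print(chain_is_valid(chain))                        # tampering is detected
```

Rewriting any historical transaction breaks the hash of its block, and with it every link downstream, which is why the ledger is so hard to falsify without being noticed.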

But “blockchain transactions are secured with digital signatures based on elliptic curve cryptography (ECC).” And ECC can be broken by quantum computing; this is an oversimplification, but imagine a computer fast enough to go through millions of potential codes in just a few seconds. A quantum computer could thus decrypt users’ private keys and even forge transactions attributed to those users. If cryptocurrency is mostly based on trust, that spells the end of such trust. 
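To see why key size is the whole game here, consider a toy discrete-logarithm example. This uses ordinary modular arithmetic rather than actual elliptic curves, and the parameters are made-up, absurdly small values; the point is only that the classical attacker has nothing much better than exhaustive search, which is precisely the advantage a quantum algorithm like Shor’s erases:

```python
# Toy discrete-log setup (NOT real ECC; illustrative parameters only).
# Real schemes use ~256-bit keys, putting brute force far out of reach
# for classical machines -- but not for a large quantum computer.
P = 467            # a small prime modulus
G = 2              # a generator of the multiplicative group mod P
private_key = 153
public_key = pow(G, private_key, P)   # easy to compute forward

def brute_force_discrete_log(g: int, h: int, p: int):
    """Classical attack: try every exponent until g**x mod p == h."""
    for x in range(p):
        if pow(g, x, p) == h:
            return x
    return None

recovered = brute_force_discrete_log(G, public_key, P)
print(recovered)   # trivial at this size, infeasible at 256 bits
```

Going from `private_key` to `public_key` is cheap in both directions only because the numbers are tiny; at realistic sizes the forward direction stays cheap while the reverse search explodes, and that asymmetry is what quantum decryption threatens.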

Perhaps, following Jack Matier, the answer lies in instilling quantum security in blockchain. “[A]t some point,” Jack writes, “blockchain developers will need to update the cryptographic portion of their blockchain to be quantum-resistant.” Jack says signature schemes can be upgraded to become “crypto-agile.” And once blockchain schemes are developed with that agility, the total population of users will have to “manually migrate” to the new platform, or else people will find their funds locked up or left defenseless against hacking.

Cryptocurrency came into the world with a lot of promise. It was supposed to give users autonomy and efficiency. It had (and still has) the potential to lift people out of poverty by giving them control of their finances (without stiff banking fees) and making international financial transactions, including remittances, easier. It even has the potential to help autonomous national movements achieve financial independence from their colonizers. The question is whether anything can remain “crypto” in a world of unbelievable and breathtakingly fast quantum computing.

The Cyber Danger Zone

The Peter Parker Principle is the name given to society’s acknowledgment of that immortal quote from Amazing Fantasy #15, the origin of Spider-Man: “With great power there must also come—great responsibility.” The quote even appeared in a United States Supreme Court decision. President Obama used it in 2010.

We may need to revive the quote and principle again, in light of some recent weirdness around cyber-warfare and fears of artificial intelligence: if we aren’t in the “Brave New World” now, I’d certainly love to see where that threshold is crossed. With Google now claiming to process information at hitherto impossible speeds via quantum computing, we have to be a little scared of offensive cyber operations, or the potential of computer autonomy, right? 

We can certainly be concerned about the cyber-ops. The Trump administration has authorized a vague program with no definitions, no indication of what threats exist, or even of what constitutes a threat. The administration won’t say what threats the program exists to counter, but the policy “eases the rules on the use of digital weapons,” and this is a significant departure from traditional defensive cyber-ops—operations that, according to the Cato Institute’s Brandon Valeriano and Benjamin Jensen, worked to stop or deter cyberattacks (as much as such a thing is possible) without risking escalation. The authors call the previous approach “low-level counter-responses” that do not increase the severity of inflicted damage.

That’s a fascinating thing, in a way, that previous administrations had the consciousness to limit their responses, perhaps because they knew that once you escalate, that escalation will come right back at you. The authors actually analyzed several operations and were able to classify non-escalation and escalation scenarios, concluding that “active defense” rather than offense was the most effective and escalation-avoidant framework. 

Second, we have what we could perhaps call “Elon’s Paradox”—that in the face of alleged threats to human autonomy from artificial intelligence, the solution may be to preemptively merge humans with AI technology, cybernetically. Musk isn’t alone in his criticism and fears of AI; the late Stephen Hawking and others have long sounded the alarm, and Vladimir Putin recently speculated that whoever leads in AI “will become the ruler of the world.” Musk is afraid it will cause World War Three.

But Musk’s solution—he wants to increase cybernetic connections between humans and machines in hopes that a merger will be more coequal than AI just outright taking over—seems a little weird. Granted, the technology of projects like Neuralink has tremendous potential to help people heal from brain damage or degeneration, and it’s a fascinating question whether systems can be developed that retain human autonomy while utilizing the potential of AI. 

But it’s not clear how it will prevent the emergence of what Musk calls “godlike superintelligence,” and besides, even leaving that kind of control up to humans is a mixed bag. After all, Google had been supplying technology to the military for drone strikes, promised to stop, and then hedged on its promise. With great power—hopefully—comes great responsibility.

The Weirdest of Weird Tech

Tentacle tech

What it is: Researchers have managed to replicate octopus flesh, developing “a structure that senses, computes and responds without any centralized processing—creating a device that is not quite a robot and not quite a computer, but has characteristics of both.”

Why it’s weird and awesome: Its developers call it “soft tactile logic,” and it can “make decisions at the material level” through input and processing on site, rather than a centralized logic system somewhere else. And you might remember widely publicized (if hotly disputed) speculation last year that octopus DNA might come from aliens, which isn’t the only thing that makes it one of the most intriguing creatures on earth.

But seriously, biosynthesis is an application of “soft” technology using “neuromuscular tissue that triggers when stimulated by light,” which, if it becomes complex enough, is practically indistinguishable from autonomous biobots. This tech actually goes back to at least 2014, when professors Taher Saif and Rashid Bashir of the University of Illinois developed a bio-mechanical sperm-like thingy. It could swim. Sure, that autonomy could be a little creepy and is the stuff that science fiction disaster scenarios and international regulatory and ethics discussions are made of. But it’s also awesome! Replacement of cells! Cures for heart disease, radical improvements in prosthetic technology and more.

They tested Loch Ness for DNA

What it was: Two New England geneticists conducted a sweeping environmental DNA survey of the greater Loch Ness area—not just the lake, but also the surrounding ecosystems. They found no sign of giant reptile DNA, aquatic dino-DNA, or any mysterious monster genetics. The scientists found signs of all kinds of creatures—fish (obviously), deer, pigs, bacteria, human tourists, but no Nessie. We’ve known for a while that those famous photos of Nessie were faked. This is another nail in the proverbial coffin. 

Why it’s important: The Monster is iconic across popular culture and pseudoscience. But it’s also fun, and historically necessary, to bust myths. More importantly, DNA testing still feels like a revolutionary breakthrough, solving real crimes while debunking legends.

The interrupting robot you’ve always wanted

What it is: Do you hate it when other people finish your sentences? What if robots did it? Called “BERT” for Bidirectional Encoder Representations from Transformers, the system uses “natural language processing.” This doesn’t seem to be too much of a stretch from the auto-complete function in texting. But it also does “sentiment analysis,” similar to the way in which businesses and political campaigns draw from masses of data in order to qualitatively analyze subjective information.
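For a sense of what “sentiment analysis” reduces to, here is a toy word-list scorer. This is emphatically not how BERT works (BERT learns contextual representations from huge corpora); it only illustrates the basic idea of turning subjective text into a number:

```python
# Toy sentiment scorer. The word lists are invented for illustration;
# real systems learn these associations from data instead of hardcoding them.
POSITIVE = {"great", "love", "excellent", "happy", "awesome"}
NEGATIVE = {"terrible", "hate", "awful", "sad", "broken"}

def sentiment(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word fraction."""
    words = text.lower().split()
    if not words:
        return 0.0
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / len(words)

print(sentiment("I love this awesome phone"))        # a positive score
print(sentiment("terrible battery and awful screen"))  # a negative score
```

A learned model replaces those hand-built word lists with representations trained on millions of examples, which is what lets it handle negation, sarcasm, and context that a lookup table cannot.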

Why it’s inevitable no matter how we feel about it: Because this kind of AI is inevitable. Daniel Shapiro, who founded an AI firm called Lemay.ai, agrees with me on this, and says that “AI does some things well and some things poorly, but on balance, the benefits exceed the costs of having an algorithm making decisions.” As to whether it’s a job killer, Shapiro says “no more than the humble spreadsheet was.”

Slipping into a new you

What it is: A postgraduate fellow at Central Saint Martins University in London, and a microbiologist at Ghent University in Belgium, have developed “Skin II,” a garment that they say will “improve body odour, encourage cell renewal and boost the immune system.” It also doesn’t need to be washed as often because, you know, reduced odor. One of the designers called Skin II “wellness clothing,” which, all jokes about B.O. aside, sounds exciting.

Why it’s basically necessary: Because odor management is an important part of the management of public spaces. People complain of odors on trains due to smokers, strong perfume, and yes, body odor. Any frequently-used space (and those are the most valuable spaces, really) is going to smell bad. Why not do our part to make it easier to manage those things publicly? Also, L.A. Metro is experimenting with deodorizers on trains, so that’s an interesting supplemental piece of tech news.