Tiny Air Vehicles Roundup

Back when we were more optimistic about the effects of technology on society and everyday life, the joke was that if we don’t get jet packs, all that technology isn’t worth the effort. The joke inspired the name of a great Scottish indie band formed in 2003, as well as plenty of social media chatter about personal air vehicles (PAVs).

And although we haven’t seen the pace of development and mainstreaming anticipated in older speculative representations of 21st-century life, PAVs are the subject of considerable R&D in both the private and public sectors (NASA has a Personal Air Vehicle Sector Project under the umbrella of its Aeronautics Vehicle Systems Program).

Here are some updates on the big three of small flying machines: hoverboards, flying “cars,” and, yes, jetpacks.

The Hoverboards

At the beginning of August, “[a]fter a failed attempt at the end of July, French inventor Franky Zapata successfully crossed the English Channel . . . on his Flyboard Air, a jet-powered hoverboard.” Apparently the challenge the first time was that the waves were too high en route to a refueling platform, highlighting the need for small vehicles to have adequate power supplies. Technically, a hoverboard may not even qualify as a “flying” vehicle because you aren’t really up that high, but both the practical application and the presumed fun of traveling on one warrant inclusion on this list. Here’s the footage, which might make you cry, since Zapata’s supporters all do when he successfully crosses the Channel.

The Flying “Cars”

These are serious business. No Chitty Chitty Bang Bang here. Even Boeing is in the game, and they’ve partnered with a boutique tech developer called, appropriately enough, Kitty Hawk, to develop the Cora, a two-seat semi-autonomous flying taxi (we are salivating).

Kitty Hawk also has the Flyer, and the video on this page shows that spider-like vehicle quietly flying over various bodies of water and a shadowy desert and hill landscape while the designer talks about his dream of building flying machines. Kind of inspiring.

The (We Were Indeed Promised) Jet Packs

As you can imagine, there’s a never-ending stream of prototypes for the jet pack. But the most interesting recent project is British entrepreneur Richard Browning’s “real-life Iron Man suit,” essentially a set of jet engines attached to the pilot’s arms and legs. Browning’s start-up, Gravity, has “filed patents for the human propulsion technology that could re-imagine manned flight,” including the jet-engine suit, called Daedalus. A beefier test flight is expected “in the next 12 months.”

Although there are a few different videos of Browning and Daedalus in flight, this simple debut footage might be the most elegant—the guy just smoothly and symmetrically floats around and lands where he took off—all with confidence that gets you thinking about the many applications of such flight.

Big Data and the Final Frontier

It may not have been as entertaining as a Star Trek fan convention, but last February in Munich, the European Space Agency and a handful of other EU organizations hosted the Big Data from Space conference, where hundreds of papers were read on the methods and applications of big space data. The conference brought together “researchers, engineers, developers, and users in the area of Big Data from Space.” 

Because missions generate such massive amounts of information, space practitioners use big data analysis for “fast analysis and visualization of data” and for the development of fail-safe systems in space—and on earth.

There is no space tech development without big data development and, as we know, space tech development is one of the starkest examples of specialized technology carrying indirect benefits to other parts of society. In some cases, the benefits are more direct than indirect. Newly designed satellites will improve our ability to measure methane gas in the atmosphere and down on earth. The Environmental Defense Fund recently announced a competition awarding $1.5 million to either Ball Aerospace or SSL to design the satellite and, upon winning the competition, build it in two years or less. Meanwhile, last September, outgoing California governor Jerry Brown “announced at the Global Climate Action Summit that California would be placing its own satellite into orbit to measure greenhouse gases. That satellite will work in tandem with the EDF equipment.”

While saving the planet is certainly laudable, big space data has a sexier application: identifying the possibilities of extraterrestrial life. This is the “most important question” for Mars exploration, according to Anita Kirkovska of Space Decentral. Systems like Elasticsearch crunch the huge volumes of Martian data generated by Curiosity (surface temperatures, atmospheric conditions, and a multitude of other readings), helping facilitate discoveries like Curiosity’s identification of organic molecules and methane in the Martian air in June, and laying groundwork for next year’s ExoMars mission. The enormous Square Kilometre Array radio telescope project “will generate up to 700 terabytes of data per second,” around the “amount of data transmitted through the internet every two days.”
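
For the curious, here’s a rough sketch of what that kind of data crunching looks like in practice. It assumes the official Elasticsearch Python client (v8-style calls), and the index name, field names, and telemetry values are all invented for illustration, not real Curiosity data.

```python
# Minimal sketch: indexing and querying rover telemetry in Elasticsearch.
# Assumes a local Elasticsearch node; index and field names are illustrative only.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index one (hypothetical) telemetry reading.
reading = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "surface_temp_c": -63.0,   # placeholder value
    "pressure_pa": 740.0,      # placeholder value
    "methane_ppbv": 5.8,       # placeholder value
}
es.index(index="mars-telemetry", document=reading)

# Ask a question of the accumulated data: which readings show elevated methane?
hits = es.search(
    index="mars-telemetry",
    query={"range": {"methane_ppbv": {"gte": 5.0}}},
    size=10,
)
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["timestamp"], hit["_source"]["methane_ppbv"])
```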

These are the voyages of big data analysis in space, its continuing mission to make sense of literally infinite fields of data generation beyond the earth’s mesosphere.

No Debate Championship for Artificial Intelligence

It wasn’t a “roast,” as Dan Robitzski of Futurism says it was, but in February, Harish Natarajan, a former championship college debater, won the audience vote over Project Debater, an IBM program designed to respond to its opponents in a debate and to crystallize and summarize the issues in its summation. The debate was about education subsidies. Project Debater was for them, Natarajan against.

It’s likely that Harish “won” the debate because he was better able to contextualize arguments, drive them home rhetorically, and draw comparisons in his final speeches—the real ethos-meets-logos factor of good debating, as much an art as a science. Meta-analysis and rhetoric are both inexact—in form as well as content—so an experienced debater does more than just generate and counter information.

The video is worth watching: Project Debater deployed eloquent quotations, provided evidentiary support, answered arguments on point, and even uttered the phrase familiar to debaters, “the benefits outweigh the disadvantages.” But that’s a stock phrase. It’s not nuanced comparison. Nuanced comparisons (including things like strategic concessions, admitting the other side is right about something in order to win a larger point) require abstract and metaphorical thinking; not much, but enough.

As Mindy Weisberger writes: “In a neural network, deep learning enables AI to teach itself how to identify disease, win a strategy game against the best human player in the world, or write a pop song. But to accomplish these feats, any neural network still relies on a human programmer setting the tasks and selecting the data for it to learn from. Consciousness for AI would mean that neural networks could make those initial choices themselves.” And, subjective experience is part of what’s required to do those things. 

There are signs that machine learning has the capacity to do this, but little of that was on display during the debate. Stanislas Dehaene and colleagues list “global availability” (the relationship between cognition and the object of cognition) and “self-monitoring” (obtaining and processing information about oneself) as components of consciousness in thinking beings. Both of those attributes would help a debater-AI unit make meaningful, contextually appropriate comparisons between arguments, as well as discern strategic concessions (an unreflective computer probably “thinks” it’s winning every point in a debate).

For a few more years, at least, humans are safe in debates and a few other spheres of public life.

‘Weird Tech’ Roundup

“Anything that was in the world when you were born,” Douglas Adams wrote, “is normal and natural. Anything invented between when you were 15 and 35 is new and revolutionary and exciting, and you’ll probably get a career in it. Anything invented after you’re 35 is against the natural order of things.” We don’t know what’s against the natural order of things, but we do have a term for historically anomalous tech: “out-of-place artifacts,” old objects that “seem to show a level of technological advancement incongruous with the times in which they were made.” This includes things like caves in China that contain what appear to be 150,000-year-old water pipes; a hammer found in an eons-old rock formation; and what appears to be a spark plug encased in a geode, dated at half a million years old.

But these are historical, or prehistorical, anomalies popular with students of the abnormal. Out-of-place technology might also refer to contemporary items that seem to serve no useful purpose, or that serve such a miraculously useful purpose that we wonder why the tech sector waited so long to create them. Every year, a few publications run with the “weirdest tech” of the previous year. The results are stimulating. For the list of 2017’s strangest and most exotic tech, Glenn McDonald of InfoWorld listed robots that weighed over 300 pounds and were capable of doing perfect backflips (no useful purpose) and miniature nuclear reactors capable of powering cities.

Sometimes the weirdness comes from anomalous performance or phenomena, almost always unintentional, that cause concern about how an item is used or received. So in Stuart Turton’s “strangest ever tech stories,” we find Dell laptops (the Latitude 360, to be precise) that smelled like pee, Syrian hackers overwhelming the BBC Weather Twitter feed to make fat-people jokes, and the allegation that Google’s Street View car killed a donkey.

Other items that serve no useful purpose are those that are incredibly expensive, available only to the top 0.1 percent, and steeped in decadence: a remote-controlled pink Leg Air Massager women can “wear” under a desk, or a robotic dog that will roll over, beg for you to pat it on the head, and do other tricks, all for the amazing price of over $2,800.

These excesses of technology might tell us why noted Luddite Edward Abbey once wrote: “High technology has done us one great service: It has retaught us the delight of performing simple and primordial tasks—chopping wood, building a fire, drawing water from a spring.” So in that sense, maybe the price is worth it.

Machine Learning Roundup

Machine learning is the branch of artificial intelligence (AI) devoted to “teaching” computers to perform tasks without explicit instructions, relying on inferences gained from the absorption and processing of patterns. There’s been pretty amazing analysis in the world of machine learning just in the last month. Citing Crunchbase, Louis Columbus at Forbes puts the number of startups relying on machine learning “for their main and ancillary applications, products, and services” at a rather stunning 8,705—an increase of almost three-fourths over 2017.

There has been talk of AI as a tool to fight climate change, which is certainly promising, but not without its limits. In order for AI to do this, it needs programs that can learn by example rather than always relying on explicit instruction—important because climate change itself is a matter of patterns. Machine learning is “not a silver bullet” in this regard, according to a report by scholars at the University of Pennsylvania, along with the cofounder of Google Brain, the founder and CEO of DeepMind, the managing director of Microsoft Research, and a recent winner of the Turing Award. “Ultimately,” we read in Technology Review‘s distillation of the report, “policy will be the main driver for effective large-scale climate action,” and policy also means politics. Nevertheless, machine learning/AI can help us predict how much electricity we’ll need for various endeavors, discover new materials, optimize the hauling of freight, aid in the transition to electric vehicles, improve deforestation tracking, make agriculture more efficient, and much more. It sounds like the uncertainties are not enough to give up on the promise.

The ultimate goal of machine learning may be characterized as “meta-machine learning,” which is in full swing at Google, where researchers are engaged in “reinforcement learning,” in effect rewarding AI robots for what they learn from older data.
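
If “rewarding a machine for learning” sounds mysterious, here’s a minimal, hedged sketch of tabular Q-learning, the textbook flavor of reinforcement learning, on a made-up five-state corridor. It is not Google’s system; the environment, reward, and parameters are invented purely to show the reward-driven update.

```python
# Minimal tabular Q-learning sketch: an agent in a five-state corridor learns
# to step right toward a rewarded goal state. Everything here is invented for
# illustration; it is not Google's reinforcement-learning setup.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                        # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def best_action(state):
    """Greedy action with random tie-breaking."""
    top = max(Q[state])
    return random.choice([i for i, q in enumerate(Q[state]) if q == top])

for episode in range(500):
    state = 0
    while state != GOAL:
        a = random.randrange(len(ACTIONS)) if random.random() < epsilon else best_action(state)
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # The "reward" part: nudge the estimate toward the reward received
        # plus the discounted value of the best action in the next state.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned preference for stepping right:", [round(q[1] - q[0], 2) for q in Q])
```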

But authors have also been writing about AI/ML’s limitations. Microbiologist Nick Loman warns that machine learning tech is always going to be “garbage in, garbage out” no matter how sophisticated the algorithms get. After all, he says, like statistical models, there’s never a failsafe mechanism for telling you “you’ve done the wrong thing.” This is in line with a piece by Ricardo da Rocha where he likens machine learning models to “children. Imagine that you want to teach a child to distingue dogs and cats. You will present images of dogs and cats and the child will learn based on the characteristics of them. More images you show, [the] better the child will distinguish. After hundreds of images, the child will start to distinguish dogs and cats with an accuracy sufficient to do it without any help. But if you present an image of a chicken, the child will not know what the animal is, because it only knows how to distinguish dogs and cats. Also, if you only showed images of German Shepherd dogs and then you present another kind of dog breed, it will be difficult for the child to actually know if it is a dog or not.”
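
Da Rocha’s chicken problem is easy to reproduce. The sketch below, using scikit-learn with fabricated two-feature data standing in for images, trains a classifier on only cats and dogs and then hands it an out-of-distribution point; the model has no way to say “neither,” so it picks one of the two labels it knows.

```python
# Sketch of the "chicken problem": a model trained only on cats and dogs
# must still answer when shown something else. Toy features, not real images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))   # fake "cat" features
dogs = rng.normal(loc=[6.0, 6.0], scale=0.5, size=(100, 2))   # fake "dog" features
X = np.vstack([cats, dogs])
y = np.array(["cat"] * 100 + ["dog"] * 100)

clf = LogisticRegression().fit(X, y)

# A "chicken": nothing like either training cluster, yet the model is forced
# to answer with one of the two labels it knows -- often quite confidently.
chicken = np.array([[-3.0, -2.0]])
print(clf.predict(chicken))
print(clf.predict_proba(chicken))
```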

You may also enjoy watching this astoundingly good 20-minute primer on machine learning.

Has Facebook Gone Flat?

Last year, the average consumer spent 38 minutes per day on Facebook, and that number will remain unchanged this year, before falling to 37 minutes next year. While some of this is attributable to loss of younger adult users, fairness also dictates that we recognize that Facebook invited this shift by emphasizing “time well spent” in place of clickbait. It’s hardly fair to fault the platform for losing a couple of minutes over a couple of years when the whole point of its shift is to emphasize (what it considers to be) quality over quantity. 

Mark Zuckerberg himself seems determined to imitate the persona that partially imitates him on the new season of Black Mirror: Topher Grace’s character Billy Bauer, whose disillusionment with his own creation, the “Persona” platform, is exacerbated by the psychotic break of one of its users. Zuckerberg recently wrote an essay of over 3,000 words explaining his plan to change Facebook “from a boisterous global town square into an intimate living room,” as the Washington Post put it. The new platform will emphasize private and small-group interactions, removing context and incentives for mass manipulation and random bullying. Like his Black Mirror counterpart, Zuckerberg seems concerned that his once-progressive idea has become regressive and dangerous. He seems willing to sacrifice the bottom line to correct course.

But to be fair, the bottom line isn’t looking great. Facebook is losing teens at a rate not explainable by its own format changes. While it still has 2.3 billion users (and so it’s crazy to talk about the company coming anywhere close to tanking), it’s feasible to envision a scenario where the platform’s current ruling status is unseated. 

But whether it’s bottom-line panic or a crisis of conscience, the moves are sparked by a perception of “continued breaches of user trust,” and the fact remains that fewer people are on the platform, and for less time. In many ways Facebook is like a once-beloved celebrity that has worn out its welcome. Prompted by the suicide of 14-year-old Molly Russell in 2017, British Health Secretary Matt Hancock recently “warned social media firms that they will face legislation if they don’t do a better job of policing the posts made on their platforms.”

Yayit Thakker, in Data Driven Investor, attributes this shift by Facebook to the same consciousness and generational shifts responsible for a new egalitarian political consciousness. “As a generation, we have used our creativity to build and support infectious ideas that have resulted in some of the greatest organizations never seen before, like Google and Facebook,” but “this new kind of power can also be abused — usually without even our realizing it.” Young people seem not to mind tearing it down and rebuilding it if things aren’t working out. Zuckerberg, who isn’t so young anymore, really, seems to want to follow their example. He could do worse. 

Doing Health Care Automation the Right Way

Health technology is an ancient concept. There were prosthetics in ancient Egypt a thousand years before the birth of Jesus, stethoscopes and x-rays emerged in the 19th century, and in the mid-20th century, transistors were developed, aiding implants and computers (which, like most of what we’re discussing in this post, facilitated data-sharing).

Now, the genie is out of the bottle on automated health care systems, from using AI for diagnoses to robots for surgery. The overriding importance of data management in that evolution is undeniable. Healthcare professionals have unprecedented amounts of data “at their fingertips,” but fingertips alone can never effectively manage such data. The promise of automated, or even AI-based management of that data is appealing because it helps those in the profession do what they have set out to do—provide the best possible care to patients.  

The challenge, however, is that knowledge is power, and the optimal distribution of knowledge is not something that just happens by itself. Automation can exacerbate that maldistribution of information because “automation proposals involve solutions that focus on highly structured data,” organizing it takes human resources, and machine-to-machine interfacing involves “complex clinical data flows” that need reliable application programming interfaces. The very complexity of those processes makes systems vulnerable to information blocking—interfering with legitimate access to medical information.

Enter the 2016 Cures Act, also called the Increasing Choice, Access, and Quality in Health Care for Americans Act, which does many things, including making information blocking punishable by fines of up to $1 million per violation.

The goal here is the facilitation of informational communication: “The Cures Act looked to facilitate communication between the diverse patchworks of healthcare providers and between providers and their patients” by requiring “the electronic players in this space to provide open APIs that can be used ‘without special effort on the part of the user.'”
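
In practice, “open API” in this space usually means a standard REST interface such as FHIR. Here is a hedged sketch of what that looks like from the client side; the endpoint URL, token, and patient ID are placeholders, not a real service.

```python
# Hedged sketch: querying a hypothetical FHIR-style open healthcare API.
# The base URL, token, and patient ID are placeholders, not a real service.
import requests

BASE_URL = "https://ehr.example.org/fhir"     # hypothetical endpoint
TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"             # obtained elsewhere via OAuth

resp = requests.get(
    f"{BASE_URL}/Patient/12345",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/fhir+json",
    },
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()
print(patient.get("name"), patient.get("birthDate"))
```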

It is the proliferation of data that makes automation optimal in so many facets of care. The development of low-code frameworks for healthcare workers to build their own applications is another part of this process. There are low-code platforms for databases, business processes, web applications, and low-code “can also work alongside emerging technologies like robotic process automation and artificial intelligence to streamline drug development processes and leverage data to inform life-saving decision making.”

The results of this interactivity are not just glamorous or exceptional lifesaving methodologies. At Health Tech, Josh Gluck writes that AI is automating basic administrative and other tasks that ultimately ought to be the easiest and most automatic parts of the profession.

It’s fascinating to see this interactivity of tech developments, legal changes, and new approaches to data-sharing, which is such a big part of health technology. We’ve come a long way since ancient Egyptian prosthetics.

AI and the Danger of Fake Data

Sapa Profiles, an Oregon-based metals manufacturer, supplied fake data along with its materials to NASA, causing rockets to burst into flames and costing the agency hundreds of millions of dollars. A report alleging fake votes in the recent Indian elections is, in turn, accused of providing fake data. Another report shows cryptocurrency exchanges wildly exaggerating their trading volumes—with fake data. The report says that as much as “87% of trading volumes reported by virtual currency exchanges was suspicious.”

In many ways, public knowledge has become simulated reality rather than shared understanding. Jean Baudrillard, a French sociologist and philosopher who died in 2007, wrote Simulacra and Simulation, arguing that public institutions have replaced all reality and meaning with symbols and signs, making human experience more simulation than reality. If that’s true, artificial intelligence must surely make it even more true.

Much has been written about the implications of fake video. “Imagine a jury is watching evidence in your trial,” forensic video expert David Notowitz writes. “A video of the suspect committing murder is playing. The video is clear. The suspect can be identified. His voice is heard. The victim’s mother shouts, ‘My baby!’ The verdict is now a foregone conclusion. He’s convicted and executed. Years later you learn the video of the murder was doctored.” Notowitz notes that we’ve already seen convincing videos of incongruent faces and bodies, engineered through “deep learning,” a type of AI used to create such images. Facebook and Twitter were recently embroiled in a row over doctored videos of House Speaker Nancy Pelosi. “Deepfake technology,” Notowitz writes, “is becoming more affordable and accessible.” These systems are improving, and they rely on “convolutional neural networks,” essentially layers of artificial neurons that learn from examples.
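
For a sense of what those artificial neurons look like in code, here’s a minimal sketch of a convolutional network in PyTorch. It is nowhere near a deepfake generator; it only shows the kind of learnable convolutional layers such systems are built from, with made-up layer sizes.

```python
# Minimal convolutional network sketch in PyTorch: the building block behind
# the "convolutional neural networks" mentioned above. Illustrative only.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learnable filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

# One forward pass on a random "image" batch, just to show the shapes.
model = TinyConvNet()
fake_batch = torch.randn(4, 3, 64, 64)
print(model(fake_batch).shape)   # torch.Size([4, 2])
```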

Of course, it’s even easier for AI to help create fake non-video data on people, in a manner far more sophisticated than the artificial fake-data generators available for systems testing. How might bad actors deploy that kind of “deepfake data”? What if large volumes of fake voter demographic or ideological data were to infect political pollsters or messaging strategists in one campaign or another? What if state and local governments received fake data in environmental impact assessments? Remember, we aren’t just talking about fudged or distorted data, but data created out of whole cloth.
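
To see how low the bar is, here’s a short sketch using the open-source Faker library; the field names and “party preference” column are invented for illustration. A few lines produce thousands of plausible-looking voter records out of whole cloth.

```python
# Sketch: generating wholly fabricated "voter" records with the Faker library.
# Field names and the party list are invented for illustration only.
import csv
import random
from faker import Faker

fake = Faker()
parties = ["Party A", "Party B", "Undecided"]

with open("fabricated_voters.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "city", "party_preference"])
    writer.writeheader()
    for _ in range(10_000):
        writer.writerow({
            "name": fake.name(),
            "email": fake.email(),
            "city": fake.city(),
            "party_preference": random.choice(parties),
        })
```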

Calls for a universal code of AI ethics should always include calls for the enforcement of provisions—or the development of new ones—against the generation of false data. Each example mentioned here could turn into a very high-stakes situation: exploding rockets, financial crashes, and so on.

Companies Big and Small Grapple with Data AI Ethics

Five years ago, in The Data Revolution, Rob Kitchin defined “Big Data Ethics” as the construction of systems of right and wrong in relation to the use of (in particular) personal data. The magnitude of data use, and its effect on things like elections and public policy, might have seemed exotic or quaint in 2014, but now we’re seeing companies like Google, Facebook, and others “setting up institutions to support ethical AI” in relation to data use. Google recently created the “Advanced Technology External Advisory Council” with a mission to steer the company towards “responsible development and use” of artificial intelligence, including facial recognition ethics. The advisors are “academics” from the fields of ethics, public policy, and technical applied AI. Entrepreneur.com also reports that the council includes members from all over the world.

It’s certainly a good time for companies to be conspicuously and conscientiously doing things like this. We’re learning that AI can often inadvertently (a strange word to use in this context) behave in ways that, if humans so behaved, we’d call “conspiratorial” or collusive. The Wall Street Journal recently reported (behind a paywall) on algorithms “colluding” to unfairly raise consumer prices. When competing algorithms received “price maximization goals,” they integrated consumer data to figure out where they could raise prices and how to out-compete one another in doing so.

But self-governance will always have limits, and those limits are not necessarily attributable to the bad intent of actors in the system. In the case of price-“colluding” algorithms, as Andrew White wrote, “[r]etailers have been using neural networks to optimize prices of baskets of good[s] for years, in order to exploit shopping habits.” Advances in AI simply allow the logic of price optimization to run its course without the intervention of retailers’ personal street wisdom about pricing.
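
As a rough illustration of the setup White describes (and only the setup), here’s a hedged toy simulation: two independent learners each pick a price to maximize their own estimated profit, with demand favoring the cheaper seller. The prices, demand rule, and learning scheme are invented; researchers studying such independent learners have reported that they can drift toward higher-than-competitive prices without any explicit agreement.

```python
# Toy sketch of two independent pricing algorithms, each maximizing only its
# own profit estimate. Illustrative of the setup; not any retailer's system.
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]                     # shared menu of prices
estimates = [[0.0] * len(PRICES) for _ in range(2)]    # running profit estimates
counts = [[0] * len(PRICES) for _ in range(2)]
epsilon = 0.1

def demand(own_price, rival_price):
    """Simple market of 10 buyers: the cheaper seller captures most of them."""
    if own_price < rival_price:
        return 7
    if own_price > rival_price:
        return 3
    return 5

for step in range(20_000):
    # Each agent independently picks a price (epsilon-greedy on its estimates).
    choices = [
        random.randrange(len(PRICES)) if random.random() < epsilon
        else max(range(len(PRICES)), key=lambda i: estimates[agent][i])
        for agent in range(2)
    ]
    for agent in range(2):
        own, rival = PRICES[choices[agent]], PRICES[choices[1 - agent]]
        profit = own * demand(own, rival)
        i = choices[agent]
        counts[agent][i] += 1
        # Update the running average of observed profit at that price point.
        estimates[agent][i] += (profit - estimates[agent][i]) / counts[agent][i]

for agent in range(2):
    best = max(range(len(PRICES)), key=lambda i: estimates[agent][i])
    print(f"Agent {agent} settles on a price of {PRICES[best]}")
```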

And Facebook’s creation of similar advisors seems not to have kept it from asking some new users to provide their email passwords “as part of the sign-up process,” a pretty tremendous failure by the platform to read the room.

And finally, the teeth these advisory boards have are, of course, always going to be limited by the will of the companies that support them. In a world where people could realistically reject the monopoly-like holds of Google or Facebook, the findings of such groups would carry real weight, but that kind of rejection is proving practically impossible. In the end, such hopes also ignore the basic paradox that users want their preferences to matter but are skittish about having their data mined (what some have called the “personalization and privacy paradox”).

The conversation among data scientists may offer the best guide for ethical practices by corporations. Last year Lucy C. Erickson, Natalie Evans Harris, and Meredith M. Lee, three members of Bloomberg’s Data for Good Exchange (D4GX) community, published “It’s Time to Talk About Data Ethics,” where they bring up the “need for a ‘Hippocratic Oath’ for data scientists,” and report on efforts to hold large conferences and symposia soliciting dozens of proposal papers on codes of ethics, from which working principles could be distilled. It’s scientists using something very much like the scientific method to develop ethics for their own methods. Not a bad model.

Four Ways Big Data Can Teach Us About People

Well, we spent almost two billion dollars on political digital ads last year around the world, and that’s a low number compared to what we’ll spend next year. We don’t need to walk through the dozens of articles published every month on the implications of this, except to say that people who think about the social effects of technology are ever-concerned about big data. Systems are so prone to abuse that some progressive governments are regulating them, Spain being the latest example, with its call “for a Data Protection Officer (DPO), a Data Protection Impact Assessment (DPIA) and security measures for processing high risk data” in elections, and its insistence that “for personal data to be used in election campaigning it must have been ‘freely expressed’ – not just with free will but in the strictest sense of an exercise of the fundamental rights to free expression and freedom of political opinion protected by Articles 16 and 20 of the Spanish Constitution.”

So on the bright side, here are four potentially helpful ways we can engage with big data responsibly, reciprocally, and in the public interest.

1. Tracking Local Political Participation

“In 2018, three BU political scientists used big data to study local political participation in housing and development policy.” They coded “thousands of instances of people who chose to speak about housing development at planning and zoning board meetings in 97 cities and towns in eastern Massachusetts, then matched the participants with voter and property tax data.” Their finding that the conversations tended to be dominated by older white male homeowners, rather than being representative of residents in general, can help inform activists about the barriers to participation in policy discussions that exist now. “[T]he dynamic,” the researchers conclude, “contributes to the failure of towns to produce a sufficient housing supply. If local politicians hear predominantly from people opposed to a certain issue, it’s logical that they may be persuaded to vote against it, based on what they think their community wants.”
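
Mechanically, the matching step the researchers describe is a data join. A hedged sketch of that kind of step, with invented column names and toy records in pandas, might look like this:

```python
# Sketch: joining coded meeting-speaker records with a voter/property file.
# Column names and data are invented; the real study's coding is far richer.
import pandas as pd

speakers = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "town": ["Cambridge", "Newton"],
    "position_on_housing": ["oppose", "support"],
})
voter_file = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "town": ["Cambridge", "Newton"],
    "age": [67, 41],
    "homeowner": [True, False],
})

matched = speakers.merge(voter_file, on=["name", "town"], how="left")
print(matched.groupby(["homeowner", "position_on_housing"]).size())
```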

2. Big Data as Ethnographic Tool

This seems counterintuitive, because people assume big data contributes to the abstraction of political views and lifestyle preferences, but some researchers are drawing the opposite conclusion, arguing that “[Big data] can be used as a powerful data-recording tool because it stores (…) actual behaviour expressed in a natural environment,” a practice “we normally associate with the naturalistic ideals of ethnography: ‘Studying people in everyday circumstances by ordinary means’.” We aren’t just learning numbers; we’re seeing how people behave in everyday life.

3. Cognitive Bias Training

I’m including this because it teaches data readers about themselves. The conversation stems from recent attempts at self-correction by Facebook, Google, and other big companies. Web tester Elena Yakimova spoke to Yael Eisenstat, former head of Facebook’s elections integrity operations, who touts “cognitive bias training [as] the key along with time, better Data Science and bigger, cleaner input data”: ways that the people who read the data, and who ask the questions, can check their own cognitive (and therefore social) biases while searching for wider and deeper variables.

4. Open Data Days

This is the coolest of all the ideas: it’s a way to engage big data to teach people about people. In Guatemala and Costa Rica, public officials are creating events like open and participatory election surveys, where people can not only fill out the questionnaires but also examine the results, or take part in examining data for participatory budgeting and other municipal functions. Thus, the “For Whom I Vote?” virtual platform has “users fill a questionnaire that measures their preference with parties participating in the electoral process. This allows each user to identify firstly their own ideological position, but also how closely they are with each political party.” It’s all transparent, and participants learn about the process as they participate.
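
Under the hood, that kind of matching can be as simple as comparing a user’s questionnaire answers with each party’s stated positions. Here’s a hedged sketch of the idea; the parties, questions, and scoring rule are invented.

```python
# Sketch: scoring how closely a user's questionnaire answers align with each
# party's positions. Parties, questions, and answers are invented.
user_answers = {"q1": 1, "q2": -1, "q3": 1}            # 1 = agree, -1 = disagree
party_positions = {
    "Party A": {"q1": 1, "q2": 1, "q3": 1},
    "Party B": {"q1": -1, "q2": -1, "q3": 1},
}

def agreement(answers, positions):
    """Fraction of questions on which the user and the party agree."""
    shared = [q for q in answers if q in positions]
    return sum(answers[q] == positions[q] for q in shared) / len(shared)

for party, positions in party_positions.items():
    print(party, round(agreement(user_answers, positions), 2))
```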

That commitment to openness is a good way to wrap the post. As data practices evolve, there are opportunities for “dissemination of knowledge in free, open and more inclusive ways.”