Machine Learning Roundup

Machine learning is the branch of artificial intelligence (AI) devoted to “teaching” computers to perform tasks without explicit instructions, relying instead on inferences drawn from absorbed and processed patterns. There has been a remarkable amount of analysis in the world of machine learning just in the last month. Citing Crunchbase, Louis Columbus at Forbes puts the number of startups relying on machine learning “for their main and ancillary applications, products, and services” at a rather stunning 8,705—an increase of almost three fourths over 2017.

There has been talk of AI as a tool to fight climate change, which is certainly promising, but not without its limits. For AI to play that role, it needs programs that can learn by example rather than always relying on explicit instruction—important because climate change itself is a matter of patterns. Machine learning is “not a silver bullet” in this regard, according to a report by scholars at the University of Pennsylvania, along with the cofounder of Google Brain, the founder and CEO of DeepMind, the managing director of Microsoft Research, and a recent winner of the Turing Award. “Ultimately,” we read in Technology Review‘s distillation of the report, “policy will be the main driver for effective large-scale climate action,” and policy also means politics. Nevertheless, machine learning and AI can help us predict how much electricity we’ll need for various endeavors, discover new materials, optimize the hauling of freight, aid in the transition to electric vehicles, improve deforestation tracking, make agriculture more efficient, and much more. The uncertainties, it seems, are not reason enough to give up on the promise.

The ultimate goal of machine learning may be characterized as “meta-machine learning,” which is in full swing at Google, where researchers are applying “reinforcement learning,” a technique that rewards an AI agent for actions that improve its performance, so that systems learn from accumulated experience rather than explicit instruction.
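
To make “rewarding” concrete, here is a minimal sketch of the simplest reinforcement-learning setup, an epsilon-greedy bandit. The reward probabilities and action names are invented for illustration; this is not Google's system, just the bare learning loop the term refers to.

```python
import random

random.seed(0)  # deterministic for the example

# Two "actions" with hidden reward probabilities (invented numbers).
true_reward_prob = {"A": 0.3, "B": 0.8}
estimates = {"A": 0.0, "B": 0.0}  # the agent's learned value estimates
counts = {"A": 0, "B": 0}

def pull(action):
    # Environment: pays reward 1 with the action's hidden probability, else 0.
    return 1 if random.random() < true_reward_prob[action] else 0

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best current estimate, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = pull(action)
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

# With enough pulls, the agent comes to prefer the higher-paying action.
print(max(estimates, key=estimates.get))
```

The only “instruction” the agent ever receives is the reward signal; everything else it infers from repeated trials.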

But authors have also been writing about AI/ML’s limitations. Microbiologist Nick Loman warns that machine learning tech is always going to be “garbage in, garbage out” no matter how sophisticated the algorithms get. After all, he says, as with statistical models, there’s never a failsafe mechanism for telling you “you’ve done the wrong thing.” This is in line with a piece by Ricardo da Rocha where he likens machine learning models to “children. Imagine that you want to teach a child to distinguish dogs and cats. You will present images of dogs and cats and the child will learn based on the characteristics of them. [The] more images you show, the better the child will distinguish. After hundreds of images, the child will start to distinguish dogs and cats with an accuracy sufficient to do it without any help. But if you present an image of a chicken, the child will not know what the animal is, because it only knows how to distinguish dogs and cats. Also, if you only showed images of German Shepherd dogs and then you present another kind of dog breed, it will be difficult for the child to actually know if it is a dog or not.”
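
Da Rocha’s analogy maps neatly onto how simple classifiers actually behave. The sketch below uses toy numbers and made-up features (not a real model or dataset) to build a nearest-centroid classifier that, like the child, can only ever answer “cat” or “dog,” so a chicken gets confidently misfiled:

```python
import math

# Toy training data: (weight_kg, snout_length_cm) -> label.
# Purely illustrative numbers, not measurements from any real dataset.
training = [
    ((4.0, 2.0), "cat"), ((3.5, 1.8), "cat"), ((4.5, 2.2), "cat"),
    ((30.0, 12.0), "dog"), ((25.0, 10.0), "dog"), ((35.0, 14.0), "dog"),
]

def centroid(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {
    label: centroid([f for f, l in training if l == label])
    for label in ("cat", "dog")
}

def classify(features):
    # Nearest-centroid: always answers "cat" or "dog", whatever the input is.
    return min(centroids, key=lambda l: math.dist(features, centroids[l]))

print(classify((4.2, 2.1)))  # a cat-like animal
print(classify((2.0, 1.0)))  # a chicken: the model has no "chicken" option
```

The point is structural: the model has no notion of “none of the above,” so out-of-distribution inputs are silently forced into the categories it was trained on.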

You may also enjoy watching this astoundingly good 20-minute primer on machine learning.

Has Facebook Gone Flat?

Last year, the average consumer spent 38 minutes per day on Facebook; that number is expected to hold steady this year before falling to 37 minutes next year. While some of this is attributable to the loss of younger adult users, in fairness, Facebook invited this shift by emphasizing “time well spent” in place of clickbait. It’s hardly fair to fault the platform for losing a couple of minutes over a couple of years when the whole point of its shift is to emphasize (what it considers to be) quality over quantity.

Mark Zuckerberg himself seems determined to imitate the persona that partially imitates him on the new season of Black Mirror: Topher Grace’s character Billy Bauer, whose disillusionment with his own creation, the “Persona” platform, is exacerbated by the psychotic break of one of its users. Zuckerberg recently wrote an essay of over 3,000 words explaining his plan to change Facebook “from a boisterous global town square into an intimate living room,” as the Washington Post put it. The new platform will emphasize private and small-group interactions, removing the context and incentives for mass manipulation and random bullying. Like his Black Mirror counterpart, Zuckerberg seems concerned that his once-progressive idea has become regressive and dangerous. He seems willing to sacrifice the bottom line to correct course.

But to be fair, the bottom line isn’t looking great. Facebook is losing teens at a rate not explainable by its own format changes. While it still has 2.3 billion users (and so it’s crazy to talk about the company coming anywhere close to tanking), it’s feasible to envision a scenario where the platform’s current ruling status is unseated. 

But whether driven by bottom-line panic or a crisis of conscience, the moves are sparked by a perception of “continued breaches of user trust,” and the fact remains that fewer people are on the platform, and for less time. In many ways Facebook is like a once-beloved celebrity that has worn out its welcome. Spurred by the suicide of 14-year-old Molly Russell in 2017, British Health Secretary Matt Hancock just recently “warned social media firms that they will face legislation if they don’t do a better job of policing the posts made on their platforms.”

Yayit Thakker, in Data Driven Investor, attributes this shift by Facebook to the same consciousness and generational shifts responsible for a new egalitarian political consciousness. “As a generation, we have used our creativity to build and support infectious ideas that have resulted in some of the greatest organizations never seen before, like Google and Facebook,” but “this new kind of power can also be abused — usually without even our realizing it.” Young people seem not to mind tearing it down and rebuilding it if things aren’t working out. Zuckerberg, who isn’t so young anymore, really, seems to want to follow their example. He could do worse. 

Doing Health Care Automation the Right Way

Health technology is an ancient concept. There were prosthetics in ancient Egypt a thousand years before the birth of Jesus, stethoscopes and x-rays emerged in the 19th century, and in the mid-20th century, transistors were developed to aid in implants and computers (which, like most of what we’re discussing in this post, facilitated data-sharing).

Now, the genie is out of the bottle on automated health care systems, from using AI for diagnoses to robots for surgery. The overriding importance of data management in that evolution is undeniable. Healthcare professionals have unprecedented amounts of data “at their fingertips,” but fingertips alone can never effectively manage such data. The promise of automated, or even AI-based management of that data is appealing because it helps those in the profession do what they have set out to do—provide the best possible care to patients.  

The challenge, however, is that knowledge is power, and the optimal distribution of knowledge is not something that just happens by itself. Automation can exacerbate that maldistribution of information because “automation proposals involve solutions that focus on highly structured data,” organizing it takes human resources, and machine-to-machine interfacing involves “complex clinical data flows” that need reliable application programming interfaces. The very complexity of those processes makes systems vulnerable to information blocking—interfering with legitimate access to medical information.

Enter the 2016 Cures Act, also called the Increasing Choice, Access, and Quality in Health Care for Americans Act, which does many things, including making information blocking punishable by fines of up to $1 million per violation.

The goal here is the facilitation of informational communication: “The Cures Act looked to facilitate communication between the diverse patchworks of healthcare providers and between providers and their patients” by requiring “the electronic players in this space to provide open APIs that can be used ‘without special effort on the part of the user.'”
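
As a rough illustration of what consuming such an open API looks like in practice, here is a sketch that parses a search result shaped like a FHIR R4 Bundle, the standard resource format many of these healthcare APIs expose. The patient names and the abridged structure are invented; real servers return far richer resources.

```python
import json

# A canned response in the shape of a FHIR R4 search bundle (abridged;
# the ids and names below are invented for illustration).
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Patient", "id": "p1",
                  "name": [{"family": "Doe", "given": ["Jane"]}]}},
    {"resource": {"resourceType": "Patient", "id": "p2",
                  "name": [{"family": "Roe", "given": ["Richard"]}]}}
  ]
}
""")

def patient_names(bundle):
    """Pull display names out of a FHIR-style search bundle."""
    names = []
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        if resource.get("resourceType") != "Patient":
            continue  # bundles can mix resource types
        name = resource["name"][0]
        names.append(" ".join(name["given"]) + " " + name["family"])
    return names

print(patient_names(sample_bundle))  # ['Jane Doe', 'Richard Roe']
```

The “without special effort” requirement is visible here: because the payload follows a published standard, a few lines of generic parsing work against any conforming provider.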

It is the proliferation of data that makes automation optimal in so many facets of care. The development of low-code frameworks for healthcare workers to build their own applications is another part of this process. There are low-code platforms for databases, business processes, web applications, and low-code “can also work alongside emerging technologies like robotic process automation and artificial intelligence to streamline drug development processes and leverage data to inform life-saving decision making.”

The results of this interactivity are not just glamorous or exceptional lifesaving methodologies. At Health Tech, Josh Gluck writes that AI is automating basic administrative and other tasks that ultimately ought to be the easiest and most automatic parts of the profession.

It’s fascinating to see this interactivity of tech developments, legal changes, and new approaches to data-sharing, which is such a big part of health technology. We’ve come a long way since ancient Egyptian prosthetics.

AI and the Danger of Fake Data

Sapa Profiles, an Oregon-based metals manufacturer, supplied fake data along with its materials to NASA, causing rockets to burst into flames and costing the agency hundreds of millions of dollars. A report alleging fake votes in the recent Indian elections is, in turn, accused of providing fake data. Another report shows cryptocurrency exchanges wildly exaggerate their trading volumes—with fake data. The report says as many as “87% of trading volumes reported by virtual currency exchanges was suspicious.”

In many ways, public knowledge has become simulated reality rather than shared understanding. Jean Baudrillard, a French sociologist and philosopher who died in 2007, wrote Simulacra and Simulation, arguing that public institutions have replaced all reality and meaning with symbols and signs, making human experience more simulation than reality. If that’s true, artificial intelligence must surely make it even more true.

Much has been written about the implications of fake video. “Imagine a jury is watching evidence in your trial,” forensic video expert David Notowitz writes. “A video of the suspect committing murder is playing. The video is clear. The suspect can be identified. His voice is heard. The victim’s mother shouts, ‘My baby!’ The verdict is now a foregone conclusion. He’s convicted and executed. Years later you learn the video of the murder was doctored.” Notowitz notes that we’ve already seen convincing videos of incongruent faces and bodies, engineered through “deep learning,” a type of AI used to create such images. Facebook and Twitter were recently involved in a row involving doctored videos of House Speaker Nancy Pelosi. “Deepfake technology,” Notowitz writes, “is becoming more affordable and accessible.” These systems are improving and rely on “convolutional neural networks,” essentially layers of artificial neurons that learn to recognize features.

Of course, it’s even easier for AI to help create fake non-video data on people, in a manner far more sophisticated than the artificial fake data generators available for systems testing. How might bad actors deploy that kind of “deepfake data”? What if large volumes of fake voter demographic or ideological data were to infect political pollsters or messaging strategists in a campaign? What if state and local governments received fake data in environmental impact assessments? Remember, we aren’t just talking about fudged or distorted data, but data created out of whole cloth.
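
How cheap is data created out of whole cloth? The sketch below, using only the standard library, mass-produces “voter records” whose aggregates look statistically plausible. The field names, party labels, and value ranges are all placeholders invented for illustration:

```python
import random

random.seed(7)  # deterministic for the example

PARTIES = ["Party A", "Party B", "Unaffiliated"]  # placeholder labels

def fake_voter_record():
    """Fabricate one plausible-looking voter record out of whole cloth."""
    return {
        "age": random.randint(18, 90),
        "zip": f"{random.randint(10000, 99999)}",
        "party": random.choice(PARTIES),
        "turnout_score": round(random.random(), 2),
    }

fake_rolls = [fake_voter_record() for _ in range(1000)]

# Aggregates come out looking "reasonable" despite being pure fiction.
avg_age = sum(r["age"] for r in fake_rolls) / len(fake_rolls)
print(round(avg_age, 1))
```

A pollster or strategist who received a file like this, padded with a few genuine records, would have no statistical tell that most of it was fabricated; that is what makes the threat distinct from merely fudged numbers.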

Calls for a universal code of AI ethics should always include calls for enforcement of provisions—or the development of new ones—against the generation of false data. Each of the examples mentioned here represents a potentially high-stakes situation—exploding rockets, financial crashes, and so on.

Companies Big and Small Grapple with Data AI Ethics

Five years ago, in The Data Revolution, Rob Kitchin defined “Big Data Ethics” as the construction of systems of right and wrong in relation to the use of (in particular) personal data. The magnitude of data use, and its effect on things like elections and public policy, might have seemed exotic or quaint in 2014, but now we’re seeing companies like Google, Facebook, and others “setting up institutions to support ethical AI” in relation to data use. Google recently created the “Advanced Technology External Advisory Council” with a mission to steer the company towards “responsible development and use” of artificial intelligence, including facial recognition ethics. The advisors are “academics” from the fields of ethics, public policy, and technical applied AI. Entrepreneur.com also reports that the council includes members from all over the world.

It’s certainly a good time for companies to be conspicuously and conscientiously doing things like this. We’re learning that AI can often inadvertently (a strange word to use in this context) behave in ways that, if humans so behaved, we’d call “conspiratorial” or collusive. The Wall Street Journal recently reported (behind a paywall) on algorithms “colluding” to unfairly raise consumer prices. When competing algorithms received “price maximization goals,” they integrated consumer data to figure out where to raise prices, and out-compete one another in doing so.  

But self-governance will always have limits–and those limits are not necessarily attributable to the bad intent of actors in the system. In the case of price “colluding” algorithms, as Andrew White wrote, “[r]etailers have been using neural networks to optimize prices of baskets of good[s] for years, in order to exploit shopping habits.” Advances in AI simply allow the logic of price optimization to run its course without the intervention of retailers’ personal street wisdom about pricing.

And Facebook’s creation of similar advisors seems not to have kept it from asking some new users to provide their email address passwords “as part of the sign-up process,” which is a pretty tremendous failure to read the room by that platform.  

And finally, the teeth these advisory boards have are, of course, always going to be limited by the will of the companies that support them. In a world where people could credibly walk away from the near-monopolies of Google or Facebook, the findings of such groups would carry real weight, but walking away is proving practically impossible. In the end, such hopes also run into a basic paradox: users want their preferences to matter but are skittish about having their data mined–what some have called the “personalization and privacy paradox.”

The conversation among data scientists may offer the best guide for ethical practices by corporations. Last year Lucy C. Erickson, Natalie Evans Harris, and Meredith M. Lee, three members of Bloomberg’s Data for Good Exchange (D4GX) community, published “It’s Time to Talk About Data Ethics,” where they bring up the “need for a ‘Hippocratic Oath’ for data scientists,” and report on efforts to hold large conferences and symposia soliciting dozens of proposal papers on codes of ethics, from which working principles could be distilled. It’s scientists using something very much like the scientific method to develop ethics for their own methods. Not a bad model.

Four Ways Big Data Can Teach Us About People

Well, we spent almost two billion dollars on political digital ads last year around the world, and that’s a low number compared to what we’ll spend next year. We don’t need to walk through the dozens of articles published every month on the implications of this, except to say that people who think about the social effects of technology are ever-concerned about big data. Systems are so prone to abuse that some progressive governments are regulating them, Spain being the latest example, with its call “for a Data Protection Officer (DPO), a Data Protection Impact Assessment (DPIA) and security measures for processing high risk data” in elections, and its insistence that “for personal data to be used in election campaigning it must have been ‘freely expressed’ – not just with free will but in the strictest sense of an exercise of the fundamental rights to free expression and freedom of political opinion protected by Articles 16 and 20 of the Spanish Constitution.”

So on the bright side, here are four potentially helpful ways we can engage with big data responsibly, reciprocally, and in the public interest.

1. Tracking Local Political Participation.

“In 2018, three BU political scientists used big data to study local political participation in housing and development policy.” They coded “thousands of instances of people who chose to speak about housing development at planning and zoning board meetings in 97 cities and towns in eastern Massachusetts, then matched the participants with voter and property tax data.” Their findings that the conversations tended to be dominated by older white male homeowners instead of being representative of residents in general can help inform activists of the barriers to participation in policy discussions that exist now. “[T]he dynamic,” the researchers conclude, “contributes to the failure of towns to produce a sufficient housing supply. If local politicians hear predominantly from people opposed to a certain issue, it’s logical that they may be persuaded to vote against it, based on what they think their community wants.”

2. Big Data as Ethnographic Tool

This seems counterintuitive because people think big data contributes to the abstraction of political views and lifestyle preferences, but some researchers are concluding something in the other direction, arguing that “[Big data] can be used as a powerful data-recording tool because it stores (…) actual behaviour expressed in a natural environment,” a practice “we normally associate with the naturalistic ideals of ethnography: ‘Studying people in everyday circumstances by ordinary means’.”  We aren’t just learning numbers; we’re seeing how people behave in everyday life.

3. Cognitive Bias Training

I’m including this because it teaches data readers about themselves. The conversation stems from recent attempts at self-correction by Facebook, Google, and other big companies. Web tester Elena Yakimova spoke to former head of Facebook elections integrity operations Yael Eisenstat, who touts “cognitive bias training [as] the key along with time, better Data Science and bigger, cleaner input data” as ways that those who read that data–and ask the questions–can check their own biases while searching for wider and deeper variables to circumvent their own cognitive (and therefore social) biases.

4. Open Data Days

This is the coolest of all the ideas: it’s a way to engage big data to teach people about people. In Guatemala and Costa Rica, public officials are creating events like open and participatory election surveys where people can not only fill out the questionnaires but also examine the results, or help examine data for participatory budgeting and other municipal functions. Thus, the “For Whom I Vote?” virtual platform has “users fill a questionnaire that measures their preference with parties participating in the electoral process. This allows each user to identify firstly their own ideological position, but also how closely they are with each political party.” It’s all transparent, and participants learn about the process as they participate.

That commitment to openness is a good way to wrap the post. As data practices evolve, there are opportunities for “dissemination of knowledge in free, open and more inclusive ways.”

Reputation & Read Rate Roundup

In just the last few weeks, perhaps because we’re jumping into new political and marketing campaign seasons, a few articles have popped up on read and response rate. One common denominator in rate enhancement is the maintenance of a good sender reputation.

The most noticeable is probably Dmytro Spilka’s audaciously titled “How I got 80% open rate in my email outreach campaign,” which lists factors like target identification, a masterful subject line, actual use of preview snippets, solid sender reputation, and effective follow-ups. One thing the article could have included at the top of the list, though, is the maintenance of data or list hygiene–updates that correct your recipient addresses and remove “unwanted names, undeliverable addresses, or individuals who have chosen not to receive direct mail offers or who have unsubscribed from email lists.” Data append services like Accurate Append provide this.
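
A minimal sketch of what list hygiene involves: deduplicating, dropping malformed addresses, and suppressing opt-outs. The addresses are invented, and the crude regex is a format check only; production services do far more, including verifying that addresses actually deliver.

```python
import re

# Hypothetical raw list: duplicates, a malformed address, and an opt-out.
raw_list = [
    "alice@example.com", "ALICE@example.com", "bob@example",
    "carol@example.com", "dave@example.com",
]
unsubscribed = {"dave@example.com"}

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # rough shape check only

def clean(addresses, suppress):
    """Normalize, dedupe, drop malformed entries, honor the suppression list."""
    seen, kept = set(), []
    for addr in addresses:
        norm = addr.strip().lower()
        if norm in seen or norm in suppress or not EMAIL_RE.match(norm):
            continue
        seen.add(norm)
        kept.append(norm)
    return kept

print(clean(raw_list, unsubscribed))
# ['alice@example.com', 'carol@example.com']
```

Even this toy version shows why hygiene feeds sender reputation: the duplicate, the undeliverable address, and the unsubscribed recipient are exactly the entries that generate bounces and complaints.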

But what I like about Dmytro’s post is that it emphasizes the dynamic at work in sender reputation assessment: Email platforms are committed to giving their users a good experience, and that means “whittling down any perceived junk automatically,” while senders want the emails to be seen and opened. Reputation score is the way you negotiate through those competing imperatives. Yannis Psarras at Moosend points out that a good reputation keeps emails out of the recipient’s spam folder. Psarras’s post also has a cute “periodic table of delivery score” that has to be seen for itself, with element-like abbreviations like Fc for fewer complaints, Bl for few bounces, Vo for consistent volume, and so on.

There are various reputation checks that guide deliverability. Consistency is one that a lot of new senders aren’t aware of. “A consistent volume of email campaigns, without major drops or spikes, plays a significant role in sender reputation,” Psarras says. “For example, if you send out an email to your list twice a week, switching to three times a week, will cause ripples. There will be times when you will want to send out more emails than normal. For example over the busy Christmas period. But aim for a regular, consistent schedule where possible.”

A particularly useful document that also posted during the last month is Return Path’s “Sending Best Practices,” a detailed PDF listing the top factors that impact sending reputation and deliverability. The document discusses complaints (you need complaint feedback loops to suppress complainers from future versions of that list), getting rid of “unknown users” after the first bounce, opt-in permission methods, and one often-neglected piece of the puzzle: giving subscribers good, relevant content when they indicate they want to receive messages from you.

2019 and 2020 should be heavy-rain years for voter- and consumer-directed emails. Systematic use of sending best practices and good data hygiene is going to be the key to recipient engagement, rather than landing in the spam folder or, worse, finding yourself in email sender jail, unable to get your messages out.

Musings on Nonprofits, Advocacy Campaigns, and Big Data

I’m a big fan of how entrepreneurs can use and manage data, but nonprofits have to use and manage data too. Most people know (or would not be surprised to learn) that data append services help nonprofits data-cleanse at the end of the year. This is vital when you devote so much time to finding new donors and keeping consistent donors in the loop–and keeping up with changes in their contact info.

But what about other facets of data management in nonprofits? Specifically, what about nonprofits’ relationship to “big data,” or data sets “too large or complex to be dealt with by traditional data-processing application software,” as Wikipedia defines the term? Interestingly, we’ve recently seen several articles on big data and nonprofits, and depending on which article you read, you might conclude that nonprofits can easily use big data, that nonprofits can only ride on the coattails of private businesses that use big data, or you may learn many ways your organization can both acquire and use it.

You can access some big data for free
Kayla Matthews’ piece last September at Smart Data Collective points out that nonprofits who can’t afford costly data platforms can get free data sources mediated by entities as diverse as Amazon, Pew Research and the U.S. Food and Drug Administration. They offer “open data” aggregation and platform services that interest groups can use at no cost. There’s also a group called the Nonprofit Open Data Collective, “a consortium of nonprofit representatives and researchers [that] is working to analyze electronically submitted Form 990 data and make it more useful to everyone who wants to see it.”

There are high-visibility organizations using it
Matthews provides a couple of powerful anecdotes in her post from last October, including the Jane Goodall Institute’s use of data entered by private citizens throughout Africa speaking to the status of and threats to chimpanzee populations, and UNICEF’s dissemination of health stats like infant mortality into public hands. It’s not just about being nice, though, as Matthews points out: “Viewing the hard data for themselves might encourage individuals to give generously when it’s time for fundraising campaigns.”

One of our favorite new apps–and new approaches–is Branch, which builds on the success of Kiva, a great platform helping small entrepreneurs – such as beginning family farmers – crowdsource startup loans. It turns out that Kiva’s co-founder, Matthew Flannery, started Branch as a new nonprofit, hoping to solve a challenge that came to be associated with Kiva: “Due to having limited connectivity, loan officers in those countries would have to travel to each borrower to distribute the money, resulting in additional costs. However, with mobile dominating digital technology worldwide, it’s now becoming possible to skip the loan officers entirely and send the money directly to the borrower via mobile. Flannery wants to use machine learning to assist with making sound lending decisions and swiftly deliver loans via mobile payment.” Pretty cool.

So what’s the problem?
For smaller organizations, the problem may simply be scale of human resources to data. In small organizations, people have a lot of hats to wear and no one person may have the capacity and training for big data management. But there may be other challenges intrinsic to the models and iterations of nonprofits. The bloggers at Pursuant say that a leading problem nonprofits have with big data is that they compartmentalize it too much. “Most organizations already have a lot of data,” they write, “but they store it in departmental silos . . . Instead of synthesizing the data from all sources, nonprofits look at one area at a time. But that approach doesn’t unleash the power of big data. Information gleaned from donor data files, special events, emails opened or closed, and what donors click on at your website must be looked at holistically. But doing that requires breaking out of departmental silos. Diffused data isn’t good for the donor, your mission, or your organization’s long-term sustainability.”

So there’s capacity, but there’s also too much diffusion of data across areas that don’t interact much with each other, and so offer no incentive or expertise on data synthesis.

Good data management
Avery Phillips, writing for Inside Big Data, suggests that taking data management to the next level “requires a structured approach that incorporates cleaning up the data (e.g. paring it down to genuinely useful and trusted information) and creating larger networks of employees involved in the decision-making process beyond those that are tasked with handling the data itself.” Your organization might not be big enough to do that yourself, so consultants may be inevitable. While that costs money, the stories in the various posts we read, including the UNICEF example, suggest it could make your organization even more money. It’s up to you to decide what level of service is appropriate for your organization. The important thing, Phillips says, is that your IT team isn’t the only department “aware of pertinent big data that might influence” an organizational decision. What you want is “a larger umbrella of team members . . .  incorporated into the ‘web of knowledge’ that big data can provide. . . helping them to maneuver themselves into the ever-crowded spotlight, communicate their mission statement effectively, and raise funds at unprecedented rates.”

So, while data management services are available no matter what, the challenge and promise is in managing your data holistically and with as many voices included in the analysis as possible.

Don’t Get Berned by Text Scamming

Last week, lots of folks noticed an unusually heavy rainstorm of texts from sources claiming to be associated with the Bernie Sanders for President campaign–although the campaign had only just begun. One response came from Anne Laurie, a Daily Kos diarist, who says she doesn’t text from her “secondhand Galaxy S6” and doesn’t provide her cell number to anyone. Nevertheless, she received numerous texts from Bernie supporters in the immediate hours after his campaign announcement. She concluded they really were from the campaign or from legitimate supporters — which irritated her even more, because she doesn’t presently support Sanders in the presidential primary.

Bernie supporters, on the other hand, may be particularly vulnerable to texting scams claiming to be affiliated with the Sanders campaign. After all, they’re an enthusiastic bunch and like to know there are like-minded people eager to meet them. The biggest concern with text scamming, or “SMishing” for “phishing” via SMS, is identity theft. Viruses are also a concern. And as online donations surge for political campaigns, avoiding scam links will become more of a challenge. 

SMishing may be growing as robocalls decrease in effectiveness (The Atlantic says “telephone culture is disappearing”). It’s true that robocalls have been used to impersonate campaigns (or sometimes do the really nefarious dirty work of racist campaigns) and continue to be used to run scams. While we were working on this article, CNN reported on a group running robocalls impersonating Donald Trump that netted $100,000, got media coverage as a scam and, at the time of this writing, was still going strong. But as New York Magazine’s Jake Swearingen wrote just a couple of weeks ago, we may be done with robocalls as a thing, since carriers now have both the technology and the incentive to block or radically screen calls–although it’s not so clear whether the same would be true for texts.

In a brand new study report, “Hamsini Sridharan of MapLight and Samuel Woolley of the Institute for the Future outline more than 30 concrete proposals—all grounded in the democratic principles of transparency, accountability, standards, coordination, adaptability, and inclusivity–to protect the integrity of the future elections, including the pivotal 2020 U.S. presidential election.” The authors argue that both anonymity and automation “have made deceptive digital politics especially harmful.” They locate solutions in public policy and legal liability (including increasing the liability of the platforms themselves, a proposal sure to raise a lot of eyebrows). But they also emphasize routes like public education and ethical guidelines embraced by the media.

Enter the Direct Marketing Association’s code of ethics–not the law, to be sure, but norms that we can hope the political industry will embrace consistently enough that those who do go outside the lines will be seen as exceptional pariahs. More important even than the code’s numerous provisions, compliance with which would end scammy or even opportunistic SMS texts, is the overall spirit of the code, a customer-centric, privacy-embracing document. And at least two provisions, 1.3 and 3.11, require that your data be cleaned regularly, which data append services will do for you.

In the meantime, we can all take some simple preventative measures. Obviously, don’t open any attachments sent via text. And whatever you do, don’t text back if you don’t recognize the source of a text: according to the Federal Trade Commission, not answering is the best way to avoid negative consequences for your identity or your smartphone. While scam and false-pretense texts are clearly illegal, some opportunistic texting schemes may slip through the cracks of the law. Under federal law, unsolicited messages and emails are illegal, and both textual and phone “robocalls” are too, but there are exceptions for political surveys, fundraising messages from charities, and popular peer-to-peer texting apps–understandable exceptions, but ones that may be easy for clever and unscrupulous people to manipulate.

Looking into 2019: AI and Data Revisited

At the end of 2018 I published a post citing Cynthia Harvey, who had herself cited a Deloitte survey with bleak predictions about companies’ use of AI. That survey had indicated that while more companies (in various industries, not just marketing or campaigning) were dipping their toes into AI, the number of companies abandoning AI projects was also high. Left alone, that citation might give readers the impression that AI is floundering.

It’s actually not floundering at all. What should not be lost is that 37 percent of all organizations have implemented some kind of artificial intelligence in their operations, a 270 percent increase over the past four years. Even if the long-term rise has its hiccups, that’s an astounding level of adoption in a short span of time.

AI helps companies use data across the board, for several reasons. AI can be built with “context awareness” that allows systems to discern when they are most needed, switch their modes around, and more. In terms of organizational processes, new artificial intelligence systems can facilitate decentralization and delegation of tasks in organizations–practices that increase profits at a time when very few things can reliably increase profits.

The implications of this technology are staggering. AI can help policymakers address poverty, help doctors slow the spread of disease, and help scientists tackle climate change, threats to the oceans, and more. Artificial intelligence can also be a powerful tool in deploying the natural intelligence of salespeople and analysts through data analysis that can supplement other data services. In marketing and campaign data analysis, one function of AI is to detect small and subtle changes in consumer or voter behavior, attitudes, beliefs, demographics, and so on. If income has risen ever-so-slightly in some area, you may be able to look for other signs of income profile changes or even gentrification; this could impact your campaign strategy.
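To make the “small and subtle changes” idea concrete, here is a minimal sketch of one common approach: flagging a new reading that deviates from a metric’s recent history using a simple z-score. The income figures, function name, and threshold are all invented for illustration; real campaign tooling would be far more sophisticated.

```python
# Hypothetical sketch: flag a subtle shift in a precinct's median household
# income by comparing the latest reading against its recent history.
# All numbers below are made up for illustration.

def flag_shift(history, latest, threshold=2.0):
    """Return True if `latest` deviates notably from `history` (z-score test)."""
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    std = variance ** 0.5
    if std == 0:
        return latest != mean
    return abs(latest - mean) / std > threshold

# Five years of (fictional) median income readings for one precinct:
income_history = [54200, 54600, 54900, 55100, 55400]
print(flag_shift(income_history, 61800))  # a jump worth investigating -> True
print(flag_shift(income_history, 55500))  # within normal drift -> False
```

A flagged precinct wouldn’t drive strategy by itself; it would simply prompt an analyst to look for corroborating signals, like the gentrification indicators mentioned above.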

And, of course, AI helps in segmentation. In campaign technology, segmentation was critical in Barack Obama’s 2008 campaign, where different videos were shown to audiences based on their own level of commitment to the campaign. Consider that it’s been just over ten years since then and AI has advanced considerably. Now, marketers and retailers are using AI to create personalized customer experiences, analyzing data so that customers may be notified by email, direct mail, or SMS if products arrive that they might like. Campaign data analysts can do the same thing.
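In the simplest case, segmentation like the tiered-video approach described above amounts to routing each supporter record into a messaging bucket. The sketch below is a deliberately rules-based toy (field names, thresholds, and segment labels are all hypothetical); production systems would typically learn these boundaries from data rather than hard-code them.

```python
# Illustrative, rules-based audience segmentation, loosely in the spirit of
# the tiered 2008 campaign videos. Fields and thresholds are hypothetical.

def segment(supporter):
    """Assign a supporter record to a messaging segment."""
    if supporter["donations"] > 0 and supporter["events_attended"] > 0:
        return "core"      # highest commitment: volunteer asks
    if supporter["emails_opened"] >= 5:
        return "engaged"   # warm: fundraising appeals
    return "prospect"      # cold: introductory video

supporters = [
    {"name": "A", "donations": 2, "events_attended": 1, "emails_opened": 9},
    {"name": "B", "donations": 0, "events_attended": 0, "emails_opened": 7},
    {"name": "C", "donations": 0, "events_attended": 0, "emails_opened": 1},
]
for s in supporters:
    print(s["name"], segment(s))  # A core / B engaged / C prospect
```

The AI angle enters when the segments themselves are discovered by clustering or when the thresholds are tuned automatically against response data, rather than being set by hand as they are here.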

The beauty of AI, or at least a very important additional benefit, may be the protection of privacy. For example, the folks at Demografy have designed a market segmentation platform that can give you demographic insights or help you append lists with missing data similar to the smart algorithms we use at Accurate Append to create wealth scores and green scores. It does this without gathering sensitive information like addresses or emails, just using names, and it can do this because of its AI component, using “very scarce and non-sensitive information as input while existing technologies use either large amount of data or sensitive personal information to detect demographics.”
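Demografy’s actual model is proprietary, but the underlying idea of name-only inference can be sketched as follows: first names carry statistical signal about demographics, so a list can be enriched without collecting addresses or emails. The name-to-generation frequencies below are entirely invented, and a real system would use a trained model rather than a lookup table.

```python
# Toy illustration of name-only demographic inference (NOT Demografy's
# actual algorithm). The prior probabilities below are invented.

NAME_PRIORS = {
    "mildred": {"boomer_or_older": 0.90, "gen_x": 0.07, "millennial": 0.03},
    "jayden":  {"boomer_or_older": 0.02, "gen_x": 0.08, "millennial": 0.90},
}

def likely_generation(first_name):
    """Return the most probable generation for a first name, if known."""
    priors = NAME_PRIORS.get(first_name.lower())
    if priors is None:
        return "unknown"  # a real system would back off to a trained model
    return max(priors, key=priors.get)

print(likely_generation("Mildred"))  # boomer_or_older
print(likely_generation("Jayden"))   # millennial
```

The privacy benefit described above follows from the input side: nothing more sensitive than a name ever enters the pipeline.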