Ignore the Headlines and Embrace the Bots

In 2018, bots became even more prevalent in the marketplace. According to a study by Distil Networks, a leading bot security company, more than two-fifths of web traffic (42.2%) in 2017 was not human. Though some may find this trend surprising or even alarming, bot traffic has grown consistently over the last five years as more companies add bots to their workflow systems. What has proven a growing concern, according to the media, is the rise of “bad bots.”

Bots can be incredibly helpful, taking on mundane or repetitive tasks and freeing humans to do more creative, thoughtful work. They have been adopted to handle customer service queries and to curate product recommendations for individual users, among other activities. But there are also bad bots, which first drew attention when they were used to buy tickets online and resell them at much higher prices. These bots are also responsible for stealing personal information, harassing users on social media, disrupting the marketplace, and, in the largest show of bot activity, potentially influencing the 2016 US presidential election. The presence and prevalence of bad bots is increasing too, with bad bot traffic up 10% last year, slightly outflanking good bot traffic (21.8% of total web traffic vs. 20.4%).

What makes bots unique is that they tend to mimic human behavior, and mimic it very well. That is what makes bad bots particularly hard to battle: they are often very difficult to detect. The existence and growing pervasiveness of bad bots adds to public concern about the implications of artificial intelligence and whether AI can “turn against humans.”

But, as with any technology, security and defense systems are being developed to thwart bad bots. The first legislation, the Better Online Ticket Sales (BOTS) Act, passed in September 2016 to deal with the aforementioned ticket-buying bots (though this continues to be a problem despite the legislation). An op-ed in Fortune earlier this year calls for both private security measures and government intervention, through creating or updating legislation to levy heavy fines and penalties on the parties creating bad bots.

Some in the technology world are leading the charge against bad bots: Twitter, for example, has challenged 9.9 million accounts thought to be spam or bots, introduced more sophisticated authentication procedures, and prevented an average of 50,000 spam or bot accounts a day from being set up.

Though bad bots are a problem and a threat to the marketplace, they should not overshadow the use of good bots to increase efficiency, improve systems, and analyze data across a variety of industries. Headlines scream that bots are bad but, in reality, roughly half of the bots out there are refining processes and allowing for further creativity, development, and increased revenue.

As Harley Davis, Vice President, France Lab and Decision Management, IBM Hybrid Cloud, writes in a February blog post, “Businesses need solutions that assist in automation rather than simply fulfilling it, handle tasks intelligently and are highly autonomous. These solutions also must deliver customer-centric and personalized experiences, at enormous scale, without a massive back-end operation to prop them up.” The next generation of bots will not simply conduct mundane, repetitive tasks; they will be able to adapt as a company grows and changes, taking on each challenge intelligently. Having a system that evolves as goals and needs change is crucial to progress and advancement as the marketplace transforms.

#CockyGate and the Perils of Trademark Bullying

Trademarks are among the most important ways creative professionals can protect their brand, ensure fans can easily identify their work, and guard against similar products from others. But trademark allocation raises tough questions about what a reasonable trademark consists of, and at what point trademarks are being used unfairly to stifle competition.

Since 2016, novelist Faleena Hopkins had been writing romance novels in her ‘cocky’ series, with titles such as “Cocky Roomie” and “Cocky Biker”. Having written 19 books and sold 600,000 copies in the series, she decided to protect her brand by trademarking “cocky”, to keep copycat authors from riding on her coattails. When her trademark registration was issued in April 2018, she sent notices to several authors whose books had “cocky” in the title, informing them of the trademark violation and asking them to either change their titles or face legal action.

Initially, a few authors complied with her demands. Jamila Jasper had published a book titled “Cocky Cowboy” in March 2018, which had the same title as a book Hopkins had published in September 2016, and was one of the authors to receive a cease and desist letter, a screenshot of which she shared on Twitter.

She wrote on her blog that she decided to err on the side of caution, unpublishing her book and republishing it as “The Cockiest Cowboy To Have Ever Cocked” after paying to redesign its cover. Although she said she’s trying to remain optimistic, she admitted that “it hurts to be attacked and it hurts to have your integrity questioned”. She also argued that similar, even identical, titles are “exceedingly common” in the romance publishing industry, and that it was therefore incredibly unfair to demand that other authors take down books they had already published and refrain from using “cocky”, an extremely common descriptor in the genre.

The internet agreed with Jasper, and a massive online backlash was unleashed against Hopkins. Writers piled onto her social media accounts, with negative comments far outnumbering likes, retweets, and shares. On Facebook, she was inundated with comments from authors and readers declaring that they would boycott her. On review sites like Goodreads and Amazon, her books were hit with negative reviews that explicitly referenced how she had targeted indie authors who lacked the resources to fight her in court.

Eventually, the Authors Guild and Romance Writers of America backed the legal challenge to her trademarking of a common word, and won, with the judge ruling that Hopkins’ desired “preliminary injunction censoring the continued publication of various artistic works is unwarranted and unsupported”. When Hopkins then decided to abandon her trademark battle, the #CockyGate saga finally came to an end.

While this particular incident might have ended well, it also reveals that the process through which trademarks are approved allows authors to overstep, intentionally or otherwise, and try to trademark overly generic phrases. One innovative way to battle this is CockyBot, a Twitter bot that automatically finds fiction-related trademark applications in the US Patent and Trademark Office’s database and tweets them.

For each application, CockyBot tweets out the phrase being trademarked, the status of the application, the documents submitted, and an Amazon search link of products that might be related to the phrase.
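As a sketch of the mechanics, a bot like this mostly needs to turn each application record into a short status update with a search link. The snippet below is an illustrative toy, not CockyBot’s actual code, and the field names are invented:

```python
from urllib.parse import quote_plus

def build_tweet(application):
    """Compose a CockyBot-style status update from one trademark
    application record (field names here are hypothetical)."""
    phrase = application["phrase"]
    status = application["status"]
    # Point readers at products that might already use the phrase.
    amazon_link = "https://www.amazon.com/s?k=" + quote_plus(phrase)
    return (f'New application: "{phrase}" ({status})\n'
            f"Possibly related products: {amazon_link}")

tweet = build_tweet({"phrase": "dragon slayer",
                     "status": "pending examination"})
```

Everything else, polling the trademark database and posting to Twitter, is plumbing around this core step.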

While most of the applications seem acceptable, occasional generic terms like “dragon slayer” and “big” also appear. Clearly, not everyone has learnt from the Hopkins affair.

However, we need to remember that while the kind of behaviour Hopkins engaged in might be unacceptable, trademarks are an essential part of how creative professionals make a living. To ensure that such incidents aren’t repeated, we need a way for authors to check for similar titles on the market without letting frivolous trademarks impede them.

A possible solution is technological. While CockyBot is certainly a step in the right direction, it still relies on human users looking through the Amazon search list themselves to check whether any products from other creators contain the word or phrase being trademarked. What we need going forward is a way for authors to check whether the title they are planning to use is already in use, as well as whether it would violate someone else’s legitimate trademark.

As the number of books hitting the market increases, and new authors try their hand at writing for niche audiences, it is no longer possible for each person to be mentored individually and taught the ways of the industry. Luckily, tech solutions like well-crafted automation can be of enormous help to these newcomers, helping them avoid pitfalls they might not even have imagined were problems.

The State of Automation - Part 3

During the past few weeks we have been looking at how automation may impact the book publishing industry in the future. In the previous post, we started exploring and analysing how many of the different roles within the publishing ecosystem could be affected by this phenomenon, revealing how upper management, HR, legal and financial positions will likely fare.

This week we turn our attention to some of the more traditional roles in publishing to understand what the future of working in the industry could be like.

Editorial: Most people who aspire to work in publishing out of a love for the written word have their hearts set on editorial jobs. From discovering new talent and working with writers to refine their work, to negotiating contracts and correcting manuscripts, editors are very much considered the heart and soul of a publishing house, and their roles are incredibly diverse and multi-faceted. But editorial responsibilities will probably be among those hit hardest by automation.

Ever since Jodie Archer and Matthew L. Jockers famously released The Bestseller Code: Anatomy of the Blockbuster Novel and unveiled the bestseller-ometer, the algorithm at the heart of the book’s thesis, much has been said about whether computers can do what was previously considered an incredibly “human” job: that of the commissioning editor.

Understanding complex emotions, what makes us tick, the journey we want a book to take us on, and the characteristics that can ultimately make a book a success: these skills are at the core of what commissioning editors do. The fact that big-data algorithms have been developed, and that machine-learning start-ups such as Intellogo and Archer and Jockers’ very own consultancy, Archers Jockers, have come into existence, shows that this is an aspect of publishing ripe for automation. But will we see the role of the commissioning editor replaced? It’s highly doubtful. It’s more likely that the commissioning editors of the future will incorporate AI tools into their role to assist them in uncovering and snapping up potential bestsellers, allowing them to focus on nurturing author relationships and managing other aspects of the book cycle.

Lower down the editorial chain of command is where automation will really take no prisoners. As workflow tools become increasingly sophisticated and integrate machine learning as the new normal, the need for copy editors and proofreaders will diminish, as the new technology sifts through manuscripts checking flow, sense, clarity, consistency, grammar, and even facts. The editorial department of the future looks very different from today’s, and those looking to enter publishing via the editorial route may find themselves training for a completely different role.

Design: Despite design being considered among the most creative disciplines in publishing, various elements of graphic design in particular are succumbing to automation. In an article ominously entitled “Automation threatens to make graphic designers obsolete”, Rob Pearl highlights his belief that much of the work designers do is already ‘prescriptive’ and being affected by automation. He goes on to discuss the work of designer Jon Gold, who is applying machine learning techniques to standard graphic design procedures, using this approach to analyse typefaces and typographic trends, for example. Interestingly, Gold’s pull-out quote states: “I’m building design tools that try to make designers better by learning about what they’re doing. Augmenting rather than replacing designers.” In publishing, where many companies traditionally opt for a particular house or brand style for book jackets, typefaces and marketing materials, automating the more procedural design processes could have an extremely positive impact on the role of the designer, freeing them up to focus on the more creative elements of their job. Designers becoming obsolete is not a likely outcome, certainly not in the short to mid term; designers training or retraining to use the latest machine learning-driven tools at their disposal is a far more realistic consequence of automation.

Production: The publishing department we expect to be hardest hit by automation is production. While there will always be a need for production personnel to oversee the supply chain and bring books to market, this area will probably be deeply affected by automation, with junior production roles most at risk. Workflow tools which incorporate machine learning are increasingly automating many key production tasks, such as formatting, layout, typesetting and proofing. They are also facilitating improved lines of communication between different departments, like design and editorial, another important aspect of the production role. To stay in the game, production staff will inevitably become jacks-of-all-trades, equipping themselves with more technical skills as well as the ability to take on editorial and design tasks.

Marketing: There is no doubt that in most marketing circles the arrival of automation is considered a force for good. Applications incorporating AI have flooded the marketplace and are already helping marketers in their day jobs, enabling them to analyse data and trends more efficiently and become more impactful in their roles. In a Forbes article by Andrew Stephen, head of marketing at Oxford’s Saïd Business School, we can see how marketing as an industry is adapting to this new reality and how digital literacy is now an important currency for existing and aspiring marketers. In publishing terms, AI can deliver a much greater and deeper understanding of consumers and readers, so those who empower their marketing departments with these valuable tools will inevitably be one step ahead.

The final post in this four-part series will examine what all this means for publishers, what the industry might look like in the future, and how publishers should consider equipping themselves for automation across the business.

Will Publishers who are Investing in Technology Be Better Prepared for the Future?

Earlier this month, Pearson announced that it had hired former Intel executive Milena Marinova for the newly created position of Senior Vice President for Artificial Intelligence (AI) Products and Solutions. As one of the first companies to create such a role, Pearson appears to be jumping headlong into finding ways to use advances in machine learning and automation to better its business.

According to The Bookseller’s article about the appointment, “Marinova said there were untapped opportunities within education where it could draw on digital and advanced AI techniques to the benefit of teachers and learners.” Could it be that it takes someone from outside the publishing industry to see the potential technology can provide to publishers?

This isn’t the first time a publisher has brought talent from outside the industry in-house to help with strategic development. Just among the Big Five, Chantal Restivo-Alessi, Chief Digital Officer at HarperCollins, worked in the music industry and banking before joining the ranks at HarperCollins; Nihar Malaviya, Chief Operating Officer at Penguin Random House, worked as a consultant for JP Morgan and directed Bertelsmann’s Business Development; Cara Chirichella, Senior Director of Digital Marketing and Technology at Macmillan, worked in customer engagement.

While some companies may invest in bringing talent from other industries in-house to provide outside perspective and skills, others rely on service providers who can create a tailored program or system to meet the publisher’s unique needs or goals.

At Pearson, Marinova will be focused on “exploring and applying existing and new technologies in artificial intelligence, machine learning, including deep and reinforcement learning, as well as data analytics and personalized learning into current and future products and services,” according to the announcement.

As the market changes and customer desires fluctuate, it is important for publishers to be agile and bring in the talent needed to address those changes. And with automation-driven technology destined to play a major role in all of our futures, in publishing and beyond, having the right people in place, with the right knowledge, might just make all the difference when it comes to future-proofing our businesses.

A Crisis in Discoverability and how we can move towards fixing it

Lacking a single central repository that collects information about scholarly papers from every discipline, it is hard to estimate the exact number of journals and papers published each year. A conservative estimate was generated by Lutz Bornmann and Ruediger Mutz in their 2014 paper Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references, where they track all material cited between 1980 and 2012, including papers, books, datasets, and even websites. Plotting the data, they found that scientific output increases by 8–9% every year, meaning total output doubles roughly every nine years. (The dip in recent years can plausibly be chalked up to more recent papers simply not having had enough time to be cited.)

Admittedly, this is an imperfect measure, because it ignores sources that were never cited, as well as those that are simply no longer cited. Still, there is at least a prima facie case that the amount of research being created is increasing dramatically.

And even this might understate the amount of potentially valuable work being produced. One academic estimates that every year 10,000 papers get written within his discipline, competing for around 2,000 publication slots. Those whose papers are rejected don’t just give up, but keep trying to publish in other reputable venues, creating a backlog that spikes rejection rates to 94%. Since it seems quite plausible that a substantial chunk of those unpublished papers are actually valuable and missed out only for lack of space, he advocates “creating a lot more journal space (maybe 3 times as much as we have now) for the additional papers to be published”.

And this isn’t even taking into consideration the effect of the Open Access movement and the trend of sharing results directly on social media and the web, and how the lack of traditional gatekeepers will almost certainly increase how much content gets produced.

What these discussions mean for publishers is that there will be an increasing need to sift efficiently through large quantities of research output: if relevant work can always be located, it is immaterial how much unrelated material is added alongside it. In other words, discoverability is going to become an increasingly pressing issue.

I speculate that two kinds of tech changes will be necessary if we are going to deal with this issue. The first is increasingly fine-grained tagging of content that will permit researchers to conduct incredibly precise searches for the topics they’re interested in. This might mean, for example, that instead of settling for a handful of keywords along with the title and author information, books will have to offer chapter-level tagging, providing both more and more precise metadata.
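To make the idea concrete, here is a minimal sketch of what chapter-level metadata might look like and how it sharpens search. The schema, titles, and tags below are all invented for illustration:

```python
# Book-level metadata plus one tag set per chapter (schema is illustrative).
catalogue = [
    {
        "title": "An Imagined Handbook of Bibliometrics",
        "keywords": ["bibliometrics", "scholarly publishing"],
        "chapters": [
            {"number": 1, "title": "Counting Citations",
             "tags": ["citation analysis", "growth rates"]},
            {"number": 2, "title": "Mapping Disciplines",
             "tags": ["co-citation", "network analysis"]},
        ],
    },
]

def find_chapters(catalogue, tag):
    """Return (book title, chapter title) pairs whose chapter-level
    tags match the query -- far more precise than book-level keywords."""
    return [(book["title"], ch["title"])
            for book in catalogue
            for ch in book["chapters"]
            if tag in ch["tags"]]
```

A researcher querying “co-citation” here lands directly on the one relevant chapter, rather than on whole books whose top-level keywords happen to be close.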

But as the metadata requirements get more demanding, traditional manual generation of metadata will become increasingly onerous. This will call for machine learning approaches that rapidly scan content and generate the relevant kinds of metadata, which a human counterpart can then simply approve. This won’t be a simple undertaking, because different kinds of data (photos, paragraphs, etc.) will need quite different technical approaches, some involving the clever manipulation of language rules and others image identification techniques. And different academic fields might require very different metadata, meaning the tech will have to pay close attention to this variety of demands instead of simply producing a generic, high-level solution.
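As a toy illustration of the suggest-then-approve half of this pipeline, a crude frequency count can stand in for a real model. Everything here is simplified for illustration; production systems would use far more sophisticated techniques:

```python
import re
from collections import Counter

# A tiny stopword list; real systems would use a proper linguistic resource.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "that"}

def suggest_tags(text, n=3):
    """Propose candidate tags by naive term frequency; a human editor
    then approves or rejects them before they enter the catalogue."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]
```

The important design point is the division of labour: the machine does the exhaustive scanning, and the human spends seconds approving suggestions rather than minutes inventing tags from scratch.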

The increase in scholarly output might seem intimidating, but I prefer to look at it more optimistically since it suggests that we have the good fortune to be living in a time where we are producing more knowledge than we know how to handle. With some clever technical fixes, we should be able to harness this increase in productivity across the board, and effortlessly navigate through these changing times.

The State of Automation - Part 2

Two weeks ago, we published the first blog in our The State of Automation series, which looked at what the experts are saying about automation and how it is likely to impact the job market, specifically putting the creative industries under the spotlight. This week we delve into how automation might affect those who work in the publishing industry, asking key questions such as: which roles could be most under threat? In what ways will automation likely help us or hinder us? And will it replace certain functions and tasks?

Many high-profile sources have proclaimed that the creative industries are among the safest sectors when it comes to the very real threat posed by automation. But that does not necessarily mean that we all have a hall pass and our jobs will be secure for life. The big differentiator here is that a “creative industries worker” is not the same thing as a “creative type”. While the latter will be far less likely to be replaced by bots, by contrast the former is just as likely as the next person to see their role affected by automation in the future in some shape or form.

Automation in publishing

The truth is we don’t know exactly how and when automation will transform certain aspects of publishing. In some areas it already has, such as the increased usage of Content Management Systems which provide simple formatting and publication. We can gaze into the crystal ball and speculate all we like, but technology evolves and accelerates at its own, often astounding, speed, and it can also be reined in and regulated in equal measure. However, what we do know is how innovations like machine learning are currently starting to be applied, and which kinds of functions they are starting to assist and benefit on the one hand, and supersede, replace and render superfluous on the other.

Like any other industry, the work that goes on behind the production of a book, magazine, newspaper or journal involves a huge range of different types of people. The publishing ecosystem is made up of legal professionals, accountants, HR directors, marketing personnel, salespeople, production and editorial staff, and C-level execs, in addition to those who originate the product (authors) and those who sell it (retailers). While publishing sets itself apart from many other industries in being very social and reliant on human-to-human dynamics and interactions, on the face of it we are still looking at organisations like those in any other domain. So, let’s analyse how automation might impact key positions within a publishing house:

C-level and upper management: It might be easy to think that those at the top of the tree will remain largely unscathed by automation — these are the decision-makers whose leadership we rely on to run a company, after all. However, a report in the Harvard Business Review in 2016 stated that managers spend 54 per cent of their time on administrative tasks. Many of the managers surveyed welcomed AI as a means of reducing their administrative workload in return for more time spent on “judgement work”, strategic thinking and building deep social skills and networks. Although automation is likely to help managers cut out daily tasks considered below their pay grades, it may also lead to the consolidation of managerial roles; for example, an organisation may not consider it necessary to continue employing COOs, CIOs, CFOs, SVPs and MDs if the CEO is able to take a more active role.

HR: If there is one department within an organisational structure where the human element reigns supreme, it’s human resources. Jobs in HR will be hard to automate, yet it’s predicted that technological developments, particularly around AI, will end up benefiting the profession a great deal in the long run. With tech giants such as Slack already developing HR-dedicated Siri-esque chatbots to handle many of the more mundane daily employee queries, platforms such as Job Market Maker and Entelo providing ever more sophisticated ways of managing talent acquisition, and training and development increasingly moving into the digital sphere, the HR role will undoubtedly be changed for the better by AI…which will give them more time to focus on any organisational fallout generated by automation.

Legal/rights: Technology has long been eating into what were once considered core legal tasks. Interestingly, a study by Duke Law and Stanford Law School recently found that AI software was able to deliver a 94 per cent accuracy rate when reviewing legal documents, compared to 85 per cent by human lawyers. AI techniques such as natural language processing have already started to provide a great deal of assistance to those in the profession, and AI contracting software is increasingly used to help process routine contracts. As due diligence and contract work become more automated, legal professionals are having to focus more on assessing risk and providing counsel, areas yet to be impacted by automation. Another development worth watching, particularly for rights professionals, is the new rights and royalties blockchain platform Microsoft is building with EY, which, when it rolls out later this year, is rumoured to be a game-changer for managing complex digital rights and royalties transactions. Whether this becomes a force for good in publishing, a job threat, or both, remains to be seen.

Financial: While the financial industry itself is consistently earmarked among the top three sectors to be impacted by automation, finance jobs within publishing are less likely to be affected for the foreseeable future. Research by Bloomberg concluded that financial managers and advisors are among the lowest risk group in the sector. Meanwhile it is expected that roles in accountancy and bookkeeping will become enhanced and will evolve to incorporate aspects of automation which make the role less open to human error.

In our next The State of Automation post we will analyse how automation may affect other roles within the publishing arena, including editorial, production, sales and marketing positions. Watch this space!

Why Publishers today can’t do without Version Control: A Primer

Philosopher Daniel Dennett once observed that there is no such thing as philosophy-free science, only science conducted without examination of its underlying philosophical assumptions. Something analogous is true of version control systems in any collaborative workplace: the question isn’t whether you use one, it’s how considered and efficient it is.

This might seem like an odd claim, but consider that a version control system is simply the process through which changes to documents are managed. So if you manage your files through imaginative names like “Meeting Report Draft”, “Meeting Report Final”, “Meeting Report Final FINAL”, you are already employing crude version control.

Of course, there are downsides to an informal system like this. Over time you are bound to mix up files, and will have to manually trawl through various folders trying to locate what you are looking for, hoping your past self didn’t use something obscure for the document title. Systematic version control stores all files in a single, easily accessible repository and timestamps each version for easy searching, letting you track down any version at any time.
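The core of such a repository is simple enough to sketch in a few lines. This is an illustrative toy, not any particular product; real systems persist versions to a server and do far more:

```python
import time

class VersionStore:
    """Toy single-repository version store: every save is kept and
    timestamped, and any past version can be promoted back to latest."""

    def __init__(self):
        self._files = {}  # file name -> list of (timestamp, content)

    def save(self, name, content, ts=None):
        stamp = ts if ts is not None else time.time()
        self._files.setdefault(name, []).append((stamp, content))

    def latest(self, name):
        return self._files[name][-1][1]

    def history(self, name):
        return [stamp for stamp, _ in self._files[name]]

    def revert(self, name, index):
        # Re-save an older version so it becomes the latest one,
        # without discarding anything.
        _, content = self._files[name][index]
        self.save(name, content)
```

Because nothing is ever overwritten, the “Final FINAL” naming problem disappears: there is one name per document, and the timestamps carry the rest.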

With the use of each file assiduously tracked, collaboration becomes easier, because it can be verified that a given file has been worked on by whoever handles the prior stage of the workflow. The ability to lock a file in use, and to control when it becomes editable for someone else, ensures that no one starts working on a file before all the preceding work actually gets done. Of course, system administrators can override these locks in case the person who locked the file becomes otherwise occupied. Importantly, the lock controls who can edit the file; it can still be viewed and downloaded at any time.
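The locking behaviour just described can also be sketched compactly. Again, all names here are invented for illustration:

```python
class LockManager:
    """Toy file-lock registry: one editor at a time, reads always
    allowed, with an administrator override."""

    def __init__(self, admins=()):
        self._locks = {}           # file name -> user holding the lock
        self._admins = set(admins)

    def acquire(self, filename, user):
        # Fails if someone else already holds the lock.
        if self._locks.get(filename, user) != user:
            return False
        self._locks[filename] = user
        return True

    def release(self, filename, user):
        # The holder can release; admins can override a stale lock.
        if self._locks.get(filename) == user or user in self._admins:
            self._locks.pop(filename, None)
            return True
        return False

    def can_edit(self, filename, user):
        return self._locks.get(filename, user) == user

    def can_view(self, filename, user):
        return True                # locks never block reading
```

Note the asymmetry: editing is exclusive, viewing is not, exactly as in the workflow described above.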

Past versions don’t get discarded; they are all stored on the server. This way, they can be referred to if issues or questions arise, and if work on a particular version renders it unfixable, a previous version can quickly be restored and treated as the latest. This creates a sandbox for trying out new ideas without being forced to abide by a tentative choice.

Finally, the fact that multiple versions are stored on the server allows for an incredibly fine-grained catalogue of tracked changes, documenting what the changes are, who made them, and when. Comments about why can be added when relevant. This is particularly useful for many-layered production processes, because linking comments to a certain set of edits goes a long way towards making the system intelligible across the board.

For many publishers, the production process tends to be both intimate and informal, with many standard processes shaped by the culmination of decisions made over years, sometimes decades. These decisions often reflect the vast experience gathered in the industry, and were made with real issues in mind. This means the way they work can feel personalized, like tradition.

The flip side is that there can be inertia against switching to new systems and new ways of working: why change something that isn’t broken? Still, it’s good to step back once in a while and ask whether the way you are working is actually logical, and whether a simple fix could make things much easier. Incorporating an official system for version control is one of those changes that offer huge payoffs in productivity and efficiency while changing little about the way people actually work.

Digital Clutter: How Does Machine Learning Make Things Easier?

Today, in an effort to avoid physical clutter and piles of paper, we tend to scan in old documents, do everything by email, and store everything in the cloud.  But we are setting ourselves up for a different kind of problem.  With a stack of paper, you can see immediately what each file is and discard those that are no longer important.  With digital files, you often don’t even take note of how much data is piling up, whether files, photographs, or apps, until it is too late.

The Dangers of Digital Clutter

Lack of organization:  With digital clutter, there is no physical pile of papers to signal that it’s time to go through them, so computer files and apps pile up, forcing users to spend hours sifting through folders to find what they’re looking for.  According to a 2012 McKinsey report, workers spend an average of 1.8 hours every day searching for information, and the problem has only gotten worse in the six years since.

Security breaches:  Not knowing which apps or documents hold personal information can open users up to hacking or misuse of sensitive data.  As the Facebook data breach showed, users’ personal data can be analyzed and used at any time, without their knowledge.

System slowdowns:  When unwanted files eat up storage, your system can slow down, making files, web pages, and processes take much longer to open and close.

How Workflow Solutions and Machine Learning Can Help

Keeping You on Track and Eliminating Unnecessary Files:  Some workflow systems, including our product suite, offer helpful reminders that bring users back to a file needing attention, and cut down on unnecessary files by having all team members work off one version of a document.  That reduces the number of versions on each team member’s computer and thus eliminates unwanted duplicates.

Organize and Categorize Photos:  Google’s Cloud Vision API is among the tools using machine learning to analyze, categorize, and organize photos, making images easier to find for both work and personal use.

Erase Personal Data on Apps:  While users may want to retain personal information on their phones and computers, there are services, such as CompleteWipe, that can systematically delete all personal information.

Automatically File and Delete Unwanted Emails:  Though there doesn’t appear to be a system yet that deletes files once they become useless, there are plenty of systems that help reduce unwanted apps and emails.  For email, ActiveInbox turns each message into an action item with a due date and files it in a folder.  Though it requires a more active way of addressing email, it pays off in time saved.
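The rule-based half of this kind of cleanup doesn't require machine learning at all. As a toy sketch (the function name and the 180-day threshold are invented for illustration, not any product's behaviour), a script could sweep files that haven't been touched in months into an archive folder:

```python
import os
import shutil
import time

def archive_stale_files(folder, archive, max_age_days=180):
    """Move files untouched for max_age_days from folder into archive (toy rule)."""
    os.makedirs(archive, exist_ok=True)
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        # Only plain files count; last-modified time decides staleness
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(archive, name))
            moved.append(name)
    return moved
```

Where machine learning would come in is in replacing the crude age cutoff with a learned notion of which files still matter, which is exactly what tools in this space are beginning to attempt.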

As we build up more and more digital clutter, machine learning tools can help us quickly and easily manage a tsunami of data that will only continue to grow.

The Games of Alan Turing

Are we asking the wrong questions about AI?

There's no lack of discussion about whether machines can be conscious and whether they can undertake all that is distinctly human. But these discussions tend to centre on the relatively narrow question of machines' computational capabilities, obscuring important aspects of how we think about consciousness.


Let's begin with Alan Turing's seminal paper, Computing Machinery and Intelligence, in which he proposes replacing the abstract question "Can machines think?" with a clever thought experiment called the Imitation Game, now better known as the Turing Test. An interrogator puts questions to someone in another room using only a teleprinter, and may ask whatever he wants. According to Turing, instead of wondering in the abstract whether machines are capable of thought, we should take it as a sufficient condition for thinking that a digital computer can answer the interrogator's questions well enough to fool him into believing it is human.

Turing gives examples of how exchanges in this game could occur:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

He argues that this set-up is valuable because it is "suitable for introducing almost any of the fields of human endeavour that we wish to include".
This aspect of the test is important to note because the stringency of the requirement is often not taken seriously. For example, the recent unveiling of Google Duplex, the Google Assistant feature that automatically sets up appointments for its users, was met with excited headlines like "Did Google's Duplex AI Demo Just Pass the Turing Test?". While the system certainly seems competent at its narrow goal, it does not come close to capturing the massive variability and depth of human communication, and so obviously fails the Turing Test.
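Stripped of any actual intelligence, the structure of the game itself can be sketched as a simple protocol: an interrogator puts the same questions to two hidden respondents, then guesses which one is the machine. The scripted answers and the naive judge below are placeholders invented for this sketch, not a model of real conversation:

```python
import random

def imitation_game(interrogator_guess, respondent_a, respondent_b, questions):
    """Run one round: collect both hidden respondents' answers, then guess."""
    transcript = {"A": [], "B": []}
    for q in questions:
        transcript["A"].append(respondent_a(q))
        transcript["B"].append(respondent_b(q))
    # The interrogator sees only the transcript, never the respondents
    return interrogator_guess(transcript)  # "A" or "B": which is the machine?

# Scripted stand-ins: a 'human' and a 'machine' that gives itself away
human = lambda q: "Count me out on this one."
machine = lambda q: "ERROR: cannot parse question."

def judge(transcript):
    """Naive judge: pick whichever respondent's answers look machine-like."""
    for label, answers in transcript.items():
        if any("ERROR" in a for a in answers):
            return label
    return random.choice(["A", "B"])

print(imitation_game(judge, human, machine, ["Write me a sonnet."]))  # prints "B"
```

The whole difficulty of the test lives in what this sketch leaves out: a machine that passes must survive open-ended questioning across any field of human endeavour, not merely avoid an obvious tell.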


Turing's paper came out in 1950, and he hoped that within a century, it would be commonsensical that machines could think:

I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

While the century hasn't run out just yet, this transformation in the way we think hasn't quite come to pass. One reason for this is a class of arguments Turing termed "Arguments from Various Disabilities", which hold that even if certain human capabilities could be carried out by machines, it takes more than that to actually think or be conscious. There will always be certain things machines wouldn't be able to do, including:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Turing's own response to these was that they were a result of faulty scientific induction. According to him, people had just been exposed to a small range of machines with limited capabilities, and had made sweeping and unwarranted assumptions about the limitations of all machines based on these. This is almost certainly right, but here Turing fails to develop a line of inquiry which I believe is vital to understanding the force of this objection.


Turing was sensitive to the fact that an adult mind doesn't leap into existence from nowhere. He points out that beyond the mind present at birth, much is taught and experienced that eventually shapes how the adult mind functions. But here Turing doesn't go far enough in seeing how dependent consciousness is on other people. After all, his way of talking about learning and experiencing treats machines as purely cerebral and solipsistic.

As Abeba Birhane explains in a recent Aeon article titled Descartes was wrong: 'a person is a person through other persons', there are aspects of human identity and being which are irreducibly relational. The presence of others and paying attention to their perspectives (both actual and imagined) play crucial roles in how humans develop a sense of self and function in the world.

I'm not suggesting Turing necessarily missed this; after all, a key reason for his development of the Imitation Game was to produce a stripped-down test that would not require much background information. But by not exploring questions about the nature of the self in his paper, Turing inadvertently kicked off a research programme that centred questions about the capabilities of machines in isolation, and to this day this colours the way we think about AI. To move past it, we'll have to face head-on those possibilities where machines develop their capabilities over time, through interactions with humans and each other, all while running computations much faster than we ever could.

I suspect that doing this will force us to confront the very plausible scenario of our oncoming obsolescence. It's tempting to pretend this isn't a serious issue, that it can never come about, but to echo Turing, I think "consolation would be more appropriate".

The State of Automation - Part 1

Automation and its impact on the job market, our livelihood and our way of life has been a hot topic for several years now. Seemingly every management consultancy, recruitment firm, IT company, think tank and government body in the world has at some point weighed in and released a study or white paper projecting the future impact of automation and all the doom and gloom that comes with it. 

We’ve seen research from leading IT analysts Gartner and Forrester, consultancies and auditors such as McKinsey and PwC, as well as renowned global economic organisations such as the OECD and the World Economic Forum (WEF) - all throwing their sizeable hats into the automation ring.

What the experts say

Each study has attempted to paint a picture of what the short and long-term future will look like; from analysing which social groups are most at risk to highlighting which jobs are most likely to become obsolete, from calculating how many of us will suffer to capturing the general public’s fears when it comes to automation.

Much of the research concludes that certain jobs are more at risk than others, highlighting those in the financial and manufacturing sectors as most under threat. And it would appear that low-skilled workers and young people in entry-level roles are the most at risk from automation, validating Martin Ford’s theory that those whose jobs “are on some level routine, repetitive and predictable” will likely feel the pinch.

The OECD goes as far as predicting that automation will create more divisions in society: between the educated and the working classes, the high-skilled and the low-skilled, and the rich and the poor.

To believe or not to believe, that is not the question

Varying wildly in their prognoses, on a scale from conservative to devastating, barely any of the research we’ve seen to date can be corroborated or supported by parallel studies, which points to a rather confusing landscape. Do we actually know how AI, robotics and other forms of automation will affect us in five, 10 or 20 years? Apparently not, which is the one main takeaway to be gleaned from all of this.

But that is not to say we should dismiss all this heavyweight research as tedious scaremongering. After all, the fact that the research is being conducted at all speaks volumes. What we do know is that, to some extent and at some point in the years to come, automation will touch our lives, in ways positive or negative depending on a variety of geographical and socio-economic factors. It’s now up to us to speculate as to how our roles might evolve over time and how we choose to prepare for the possible, probable or inevitable.

Impact on the creative industries

Those who work in the creative industries are often cited among the low-risk groups who, alongside healthcare and science professionals, are less likely to see their roles disrupted or destroyed by automation.

In 2015, Hasan Bakhshi of UK non-profit Nesta claimed that “creativity is one of the three classic bottlenecks to automating work” and that “tasks which involve a high degree of human manipulation and human perception – subtle tasks – other things being equal will be more difficult to automate.”

Within the creative industries, including publishing, these kinds of hypotheses have triggered the common and widespread view that we are all somehow exempt from automation, and that the craft and humanistic qualities of our work will shield us from the dangerous and entangling tentacles of automation.  

But this couldn’t be further from the truth. 

Over the next few weeks in this State of Automation series, we will examine how automation, particularly AI, will likely affect the publishing industry. We will look at the roles which are most and least at risk and discuss how the industry could potentially evolve to be better equipped to embrace forthcoming innovation.