Is it time to open up Peer Review?

Peer review is arguably the keystone of academic publishing, with reviewers serving as gatekeepers of legitimacy, tasked with ensuring that standards are maintained and trust in the field is sustained. It is also, for the most part, a thankless job. This might be about to change.

The practice of peer review probably began when Henry Oldenburg, Secretary of the Royal Society in the mid-17th century, sent manuscripts to experts for vetting before publication. Since then, peer review has become more institutionalized, but the form itself has been remarkably stable: an editor sends a manuscript to a handful of experts in the relevant sub-domain, and if the experts green-light it as a valuable academic contribution, it is published.

Anonymity is crucial to how this system works. The peer reviewer is unnamed so that they can offer honest evaluations of the quality of a submission without fear of retaliation, particularly if the author is someone with clout. The identities of authors are also kept anonymous, so reviewer judgments aren’t influenced by personal relationships, whether warm or cold. Of course, academics present their work at conferences, and some sub-fields are so small that everyone knows what everyone else is working on, but for the most part secrecy is taken seriously and respected for what it makes possible.

Of late, a growing chorus of scholars has been questioning whether this received wisdom about the importance of anonymity in peer review is as sound as it might initially appear. For example, Caroline Schaffalitzky de Muckadell and Esben Nedenskov Petersen argue both that papers accepted for publication should be published along with their peer reviews, and that the published reviews “should include not only reviews from the journal accepting the paper, but also previous reviews which resulted in rejections from other journals”.

This is a fascinating proposal for a number of reasons. Earlier peer reviews and corrections are vital parts of how academia works, and simply sweeping them under the rug leaves this whole aspect of the process opaque. Peer reviewing is also notoriously arbitrary, as is captured well by the meme of the capriciously cruel “#Reviewer2”. John Turri of the University of Waterloo argues that maintaining anonymity keeps academic disciplines from developing open norms about what is worth publishing, frequently leading to frustration when a submission is rejected on seemingly arbitrary grounds. If we want honest discussions about the state of a field, we need to be able to see what standards are actually being employed to determine what gets published in its journals.

As compelling as these arguments are, the worry is that the persistence of reviewer anonymity undercuts the very benefits they advocate for. For example, one scenario de Muckadell and Petersen are keen to avoid is that of unqualified or abusive reviewing. The idea is that if people know their reviews are going to be made public, they will take care not to review abusively. But if reviewers know they’ll remain anonymous regardless of how abusive their reviews are, it isn’t clear why they would be motivated to change their behaviour. So although the authors might still succeed in their aim to “put forward [reviewers’] arguments for public scrutiny”, this might not be enough to elevate the quality of reviews or decrease the incidence of abuse.

Considerations like these have led to a more radical proposal — remove peer review anonymity.

At first encounter, this might seem like a dangerous suggestion. After all, how can early career academics, especially those in vulnerable employment, critique honestly, and even harshly when it is called for? But if we look past this first reaction, two strong arguments can be found for the position.

For one, reviewers will finally have an incentive to write more responsibly, since they can claim credit for well-written and well-argued reviews but obviously cannot for abusive ones. It is true that a reviewer with no interest in using their peer review history as an academic credential might continue being abusive, but this should still improve matters significantly.

Second, it should be pointed out that peer review is still academic work, even essential academic work. As Justin Weinberg from the University of South Carolina puts it, “my sense is that the credit one gets for peer reviewing is disproportionately small compared to how important peer reviewing is for the academic enterprise”. Giving people the ability to take credit for work well done, then, is a matter of fairness.

An interesting model for how this might be done, without stripping anonymity from reviewers without their permission, is the website Publons. It collects information from peer reviewers on a voluntary basis and verifies it with the journal publisher. This allows for the creation of reviewer profiles that each reviewer can claim credit for and add to their CV.

With any solution there are skeptics. David Roy Smith from Western University admitted that, as an early career academic, he simply hadn’t had the opportunity to review many papers, especially for prestigious journals, and so he wasn’t all that eager to sign up. In addition, there’s the perpetually relevant question of whether the endless march toward quantifying and comparing work and its impact is actually good for academia.

Still, the removal of anonymity in peer review, voluntary for now, seems to be the direction we’re travelling, and so we need to take it seriously. Universities and funding organizations will need to incorporate the now-public data about peer reviews into their decision making, and decide whether people without publicly recorded reviews will be penalized. Publishers and publishing tech will need to build the ability to transfer and approve finished peer reviews quickly to standard sites, so workflows don’t get cluttered.

The opening up of peer review is bound to be a momentous transformation of a procedure that hasn’t changed much in centuries, so who knows where we’ll end up.

A Survey on Workflow and Automation

Over the last year since PageMajik launched, we have spoken to hundreds of publishers about their workflow challenges, learning how much time they spend on repetitive tasks, how this impacts time management, and what the main barriers are that prevent them from launching a product into the marketplace in a timely fashion. What we have found in our conversations is that, more often than not, there are old, legacy systems in place that greatly hinder publishers’ efficiency and potential revenue. And when new modules are created, they are based on old technology and don’t adapt to the innovations being utilized in other industries.

With that in mind, we are delighted to announce that we have partnered with the Book Industry Study Group (BISG) to expand our conversations to a larger scale, to gain an even better understanding of the challenges publishers face and how these business-critical workflow issues can be resolved. BISG and PageMajik have put together this survey for publishers of all sizes to identify trouble areas in the workflow, to highlight where technology might be vital, to gauge attitudes towards automation, and to reveal how publishers feel automation might benefit them in their role.

To participate, please click here and share your experiences.

How to keep publishing tech from “Locking-in” academics

Scholarly publishing has recently been beset by fears that large publishing companies are creating end-to-end publishing platforms that would unfairly create dependence and entrench monopolies by providing services to academics that become the standard. Smaller publishers, unable to afford the kinds of acquisitions the behemoths make, will simply be powerless to compete with the ease and efficiency of, say, their submission systems, for both academics and publishers. As more of the market is captured by the large publishers, they gain more power over the terms of contracts, prices, and so on.

The presence of different-sized competitors in a marketplace always raises concerns about the sustainability of the smaller players. After all, their bigger counterparts have more resources to pour into R&D to create more efficient tech and processes, giving them more of a market advantage and helping them get even bigger. In cycles which can be considered virtuous or vicious, depending on your standpoint, bigger organizations always threaten the tenability of the smaller ones. The reason we can’t simply resign ourselves to this dynamic is that monopolization of the field by a shrinking number of competitors allows the survivors to effectively fix, unilaterally, the terms of contracts everyone else is forced to abide by.

The usual response to domination by large publishers is to advocate for open science, but Open Access can’t really help here because OA publishers need to be competitive as much as anyone else. If the most effective tools are found exclusively in journals by large for-profit publishers, then even researchers who might otherwise be sympathetic to OA initiatives might opt to publish elsewhere.

But unlike the standard case of “Big Deal” packaging by large publishers, there is one crucial difference in the case of workflow management solutions — competition is possible. Unlike access to journal articles which publishers control, there are plenty of tech companies already dedicated to producing solutions for publishers. Instead of naively hoping that large publishers abstain from engaging in an arms race over tech, we can slightly less naively hope to maintain best-practice guidelines that tech companies are required to abide by. These could include:

  1. Keeping the entire system as modular as possible, so the system architecture can’t by itself straitjacket libraries into an all-or-nothing choice.
  2. Keeping proprietary file formats to a minimum, to allow hassle-free disengagement from the system if required.
  3. Offering different bands of pricing, to ensure small and medium-sized publishers can access at least a bare-bones system.

Why would publishing tech agree to this? For one, and most idealistically, most people in publishing tech are themselves invested in the health of academia. But for the few who might need a little motivation, the scholarly community at large can make clear what tech it will be willing to work with. Soft pressure and shaming, especially in the age of social media, might just be enough for such an ambitious endeavour.

Of course, we can’t be sure such a tactic would even be feasible or effective, particularly given that larger publishers can simply acquire tech companies. But given where we are today, ensuring that publishing tech is willing to help resist publishing monopolies might very well be our best shot at keeping the marketplace competitive. This isn’t going to make resource and size disparities disappear, but it just might ensure that everyone plays by the same rules.

Humans are Afraid of AI, but Why?

In this blog, we write a lot about the future of publishing with the introduction of machine learning and artificial intelligence to help automate repetitive tasks and make workflows more efficient. We also highlight that there is still resistance in the industry, and the world at large, to embracing technology, due to fears about machines taking human jobs. But what is at the heart of that fear? And should we give in to it by regulating how much machine learning we implement in our workflows?

In a recent article in Fast Company on the need for AI, writer Robert Safian shared a colleague’s mantra: “Everything in an organization that can be done by machines should be done by machines — efficiency dictates it. But everything that needs to be done by humans must be done by humans. The defining characteristics of an enterprise — those involving ethics, judgment, creativity, and compassion — require a human touch.” As an example of an instance in which a human touch was needed, the article highlighted Nike’s recent decision to feature controversial NFL star Colin Kaepernick in its “Just do it” campaign. At the center of a debate that has extended beyond the National Football League and its fans into the very center of US power, the White House, Kaepernick does not, on paper, seem like a logical choice for a spokesperson. A machine would never have selected him from a list of choices. But as a representative of what Nike stands for, “to bring inspiration and innovation to every athlete,” Kaepernick, who stood up for something he believed in and sacrificed his career for it, fit perfectly. Only a human could see the potential, and only a human could have made that decision.

A Pew Research study conducted last autumn showed that 72% of respondents are worried about a future in which machines are able to do many jobs currently held by humans. The study went on to outline how people want to place restrictions on when and how much machines are involved in an organization: “in the event that robots and computers become capable of doing many human jobs, for example, 85% of Americans are in favor of limiting machines to performing primarily those jobs that are dangerous or unhealthy for humans.” In addition, respondents were in favor of restricting how many jobs a company could replace with machines, of still giving jobs to humans even if a machine is capable of doing them faster, and of providing guaranteed pay for humans even if a machine was doing the work.

It’s clear from these results that humans are concerned about machines coming into the workplace and taking their jobs, but we are ignoring the second part of that concern: humans are afraid of adapting. Whether that means adapting away from a system they are comfortable with, or to a world in which they must become more creative and focus on the bigger picture, which may require more focused thinking and energy, is unclear. Machines offer the opportunity to stop doing mundane tasks and embrace more creative, thoughtful pursuits and ideas. Why are humans afraid of that? We’d love to hear your thoughts on the issue.

A Week in the Life of Blockchain

It’s hard to ignore the omnipresent buzz surrounding Blockchain. It is mentioned in every media outlet we consume, it infiltrates seminars and conferences we attend for work, it’s a constant on everybody’s social media feeds and it pops up in conversation all too often.

And as the noise around blockchain increases to almost deafening levels, so too does the polarity between blockchain’s opposing factions, with evangelists and naysayers alike shouting ever louder. If we take a look at just a small selection of articles which have appeared in the media over the last few weeks alone we can observe how a remarkably conflicted and jarring landscape is starting to develop.

Blockchain is “useless”

This week at the Blockshow Conference in Las Vegas, economist and renowned crypto-critic Nouriel Roubini, who was dubbed Dr Doom after he predicted the 2008 global economic crisis, stated that “blockchain is probably one of the most overhyped technologies ever, with the amount of hype vastly exceeding what are going to be the applications of it.” In a follow-up interview with Forbes, he went as far as to say, “It’s useless technology and will never go anywhere because of the proof of stake and scalability issues. No matter what, this is not going to become another benchmark because it is just too slow.”

Scalability and speed are concerns echoed by Daniel Newman in his article entitled Don’t believe the hype: understanding blockchain’s limits, who also adds trust and security into the mix of stumbling blocks which are dragging blockchain out of its “honeymoon phase”, as he puts it.

Downplaying growth

Meanwhile, a cluster of reputable IT analysts have published reports in recent weeks that bust the myth of widespread blockchain adoption and roll-outs. Gartner’s 2018 CIO survey found that just one per cent of the CIOs who took part indicated any kind of blockchain adoption, with 77 per cent saying they had no interest in the technology and no planned action to investigate or develop it. The firm also claimed that the technology is entering a “trough of disillusionment” phase as interest in blockchain “wanes”.

Backing this up, Forrester also released a report which estimates that 90 per cent of active blockchain projects will either be put on hold or abandoned altogether.

The other side of the (bit)coin

Despite all this, the balance in the media tips firmly in the opposite direction, as international governments, financial companies, tech firms and many others, eager to be seen as a cut above the rest and to skyrocket their share prices, seek to promote their adoption of blockchain technologies.

Take this week alone as an example. We’ve seen:

· The World Bank launching its first blockchain bond

· Russian state pension fund announcing plans to deploy blockchain tech

· The OECD announcing a new Blockchain Policy Forum event

· China launching its new Blockchain Lab initiative

And this is merely a minuscule sample of the vast cacophony surrounding blockchain in any given week.

Just to build the hype even further, PwC this week published its 2018 Global Blockchain Survey, which found that an astonishing 84 per cent of executives interviewed said their companies are “actively involved” with blockchain technology, research which, confusingly, paints a completely different picture from the studies published by the IT analysts.

Staying grounded

As with many exciting and innovative technologies, everybody wants to jump on the bandwagon and find a way to apply it and make it work for their business. Some will find that once the initial excitement recedes, a project is deemed too ambitious or that there are too many barriers rendering it difficult to get off the ground, whereas others, often with more realistic applications, will succeed and transform the way they work.

With blockchain we are at a pivotal phase in which companies need to understand exactly how the technology can fit into their work cycle and be of benefit to them. As Varun Mayya, CEO of Avalon Labs, says: “The good news is that good projects will continue to survive and authentic ones will continue to reap the benefits of both blockchain and smart contracts.” I am delighted to be part of one such organisation, recognised last week by Forbes as one of the companies using blockchain technologies to help transform the publishing industry and improve education.

Re-inventing the Research Text

There’s been a sustained conversation for a while now about how tech will impact the ways research is produced, read, and propagated. With the advent of complex digital books, for example, researchers will finally be able to store the wealth of raw data and sources they collected during fieldwork and make it immediately accessible to anyone who wants more information, instead of forcing them to go online and dig through files.

But innovations like this take the book itself (as it currently exists) for granted. Even though it doesn’t quite strike us so in our everyday lives, the book is a profoundly unnatural way of presenting information to others. It requires all the relevant information, regardless of subject, complexity, and source type, to fit within a linear text of typically a few hundred pages. A fascinating question to consider is how the reading experience could change if we were willing to alter the book’s linearity itself.

Consider for example the set of texts that proceed axiomatically, that is, by building an elaborate deductive system from a set of basic assumptions. I have in mind works like Newton’s Principia, Wittgenstein’s Tractatus, and Spinoza’s Ethics. I can’t speak for the authors, but for most of us who attempt to read these today, understanding what’s being said usually means frantically flipping back to the various theorems proven earlier in order to put them together in a way that makes the later theorems intelligible. The biggest hurdle to faster learning here is the linearity our current books impose on us. Smart ebooks could change this, and there are already some indications of how linearity can be done away with.

A PhD student at Boston College, John Bagby, created visualizations of the entirety of Spinoza’s Ethics, with each node representing a proposition.

Clicking on a node reveals its connections to other nodes and brings up a dialog box stating all the relevant propositions (the one selected, plus its parent and child propositions). Just like that, the linearity that was taken to be constitutive of our reading experience for centuries is shown to be a mere constraint, and the visualization makes the connections far easier to pursue. That isn’t to say reading Spinoza becomes easy, but this would undeniably make the text tremendously more accessible, both for beginners attempting a first read and for experienced researchers hunting down some obscure subtlety.
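At bottom, a visualization like Bagby’s rests on a simple data structure: a dependency graph of propositions. A minimal sketch of that idea follows; the proposition labels and dependencies are invented for illustration, not taken from Bagby’s actual data.

```python
class PropositionGraph:
    """A toy dependency graph of propositions in an axiomatic text."""

    def __init__(self):
        # Maps each proposition to the earlier propositions it cites.
        self.deps = {}

    def add(self, prop, cites=()):
        self.deps[prop] = list(cites)

    def parents(self, prop):
        """Propositions this one builds on."""
        return self.deps.get(prop, [])

    def children(self, prop):
        """Later propositions that build on this one."""
        return [p for p, cites in self.deps.items() if prop in cites]

    def context(self, prop):
        """What a reader needs when clicking a node: the selection,
        its parents, and its children."""
        return {
            "selected": prop,
            "parents": self.parents(prop),
            "children": self.children(prop),
        }


# Illustrative (invented) fragment of an axiomatic structure.
g = PropositionGraph()
g.add("E1p1")
g.add("E1p2")
g.add("E1p5", cites=["E1p1", "E1p2"])
g.add("E1p11", cites=["E1p5"])

ctx = g.context("E1p5")  # E1p5, with its parents and children
```

Nothing here is specific to Spinoza: any text whose claims cite earlier claims, from geometry textbooks to legal codes, could be navigated the same way.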

As ground-breaking as this is, an obvious drawback is that very few books lend themselves to being transformed in this particular manner. But we shouldn’t be too quick to dismiss its relevance. For far too long we’ve been asking ourselves what the next big idea will be. Perhaps it is time to acknowledge that the future isn’t about a single all-encompassing idea but many ideas, pushing in many different directions. For such a future, however, tech companies will have to stop thinking in terms of delivering a single, clear-cut solution, and instead think in terms of platforms capacious enough to allow different authors, designers, and publishers to push the envelope in their own ways, on their own terms.

Moving away from the "stupid" e-book: An opinionated survey of our options

Earlier this year, Hachette CEO Arnaud Nourry remarked that the ebook is a stupid product, since "it is exactly the same as print, except it’s electronic. There is no creativity, no enhancement, no real digital experience." While shocking in its honesty, it also prompts the obvious question: what would a non-stupid ebook look like?

When contemplating how technology can alter the future, there are two risks to look out for. The first is the false positive, where we fantasize recklessly about tech which actually isn’t the revolutionary game-changer it is imagined to be. The second is the false negative, where we are insufficiently sensitive to the potential of something before us. And that’s not even taking seriously the role of sheer luck in making or breaking a product. Still, speculate we must, and so we might as well do it with full self-awareness of the risks undertaken. So what could the next wave of ebooks consist of?

Custom Books

One obvious-seeming answer is to point to personalization. While we might even one day have tech capabilities for this, I’m still quite skeptical about how popular this would be.

For one, we already have some idea of what personalization could look like. Companies already provide services where they insert names into fixed slots in books, allowing you, or anyone you choose, to be the protagonist of the story. An intriguing idea, but also one that strikes me as a gimmick anyone would tire of fast. Admittedly there’s some more space for children’s books to innovate in this regard, for example how “Put me in the story” incorporates photos of kids into the books they read, but again I’m not sure the trend can outlast the novelty factor.

Interactive books that draw from video games, where the reader chooses how the plot proceeds and what the character should do, will also be possible. But we already have video games, and besides, if I wanted to “do things”, I would just go outside. Unless books can somehow deliver adventure that the cutting-edge video gaming industry cannot, this sort of personalization is unlikely to gain much purchase in the market either.

Perhaps the most radical possibility is that of books custom written for an individual based on interests and favorite genres. With the wealth of information about ourselves we store online, anyone brave enough to give access to a publisher might be able to get a book version specifically written for them! I can conceive of this taking off, but even here I suspect all might not be well. A large part of the book reading experience apart from the actual reading consists in listening to others talk about it, talking about it online and in person with friends, reviewing it and reading the reviews of others, and above all arguing over minute details with others who love/hate it just as passionately. In other words, there are social aspects and rituals predicated on all of us reading the same book, which would be lost if all of us were reading different versions. So even if this kind of personalization were possible, our shared culture of reading might have to change considerably, and not necessarily in a positive way.

Interactive Books

A far more promising approach is the incorporation of multimedia in books, which can include audio, video, gifs, maps, AR, and VR. The application in travel guides and books on faraway places is obvious, and I can’t wait to use books that let me see how various locations actually look before booking a vacation, or perhaps even more importantly, that give a sense of distant places to those who aren’t able to make it there just yet. And children, who’ve shown themselves quite susceptible to the charms of YouTube, will probably be delighted to have their dull school exercise books guided by Dora the Explorer (or someone else less likely to violate copyright).

Other genres might also find surprising, as-yet-unexplored potential in multimedia. In high fantasy, for example, it is common for maps to be provided at the beginning of the book, with characters traversing them during the story. To be able to explore these maps immersively while reading, to get a sense of how the journey proceeds, could enhance the experience significantly (and I might have spent far less time flipping back and squinting at Tolkien’s maps as a teenager).

The desire for a multi-media experience isn’t restricted to children, of course. When the distinguished philosopher G. A. Cohen delivered his Valedictory Lecture at the age of 67, he sent his colleagues a CD recording along with the text of the lecture itself, with a note saying, “please don’t read the text except when listening to the CD, because the text is much less funny unspoken.” And who knows what other applications might be found?

As promising as these enhancements are, some caution is in order. Ever since Our Choice, Al Gore’s “first feature-length interactive book” from 2011, there have been predictions about the rise of the interactive book, and they have failed to materialize. What this shows, I think, is that while there is definitely space for enhancing the reading experience, readers don’t necessarily want the core experience itself transformed. As fun as map immersion would be, when it comes to the reading itself I still want uninterrupted text, with the enhancements brought up only when desired, and typically desired rarely. For all the talk about change, I can’t really imagine giving up the experience of sustained reading itself.

The fully interactive text then looks like a false positive, something that seems like an obvious game changer, which instead fizzled out. The ability to rotate a windmill in Our Choice by just blowing on the screen, while a cool party trick, has very little use for readers. And having videos disrupt reading is distracting, especially after the novelty wears off.

But I wouldn’t dream of suggesting that the eBook, as it is, is the insurmountable pinnacle of innovation. Nourry is right: the current ebook really is stupid! But at least part of the reason for this languishing is that we’ve been a little too taken with tech capabilities, instead of asking whether readers would actually find their experience made better over the long term. What publishing needs is a tech philosophy that doesn’t allow current reader preferences to limit change, but that also pays attention to where readers actually are with regard to their habits and needs. Luckily, the now-burgeoning industry of publishing-specific tech might mean we get a truly smart eBook sooner than anyone might suspect.

Why Rights and Licensing Automation is Essential to a Publisher's Bottom Line

An Interview with Jane Tappuni, General Manager, IPR License

The rights department is not an area in which publishers tend to invest, and yet it’s one of the key areas of the industry with untapped revenue opportunities. With most rights deals still handled via paper contracts and one-to-one communication between editors and rights holders, the process can be slow. Furthermore, it’s hard for publishers to keep an accurate accounting of what rights they hold (and of when a license runs out or rights revert to another party), harder still to monetize those rights against current market trends, and more difficult yet to generate quick deals in order to free up time for the more complicated ones that require careful consideration.

Enter technology. In a blog post earlier this year, we discussed rights deals and smart contracts, and illustrated how we thought they might be useful to publishers: “For publishers, the world of contracts unfortunately continues to be predominantly ruled by paper, creating a lag in transactional payment and royalty collection. But, that doesn’t have to be the case going forward.”

By automating systems in the rights department, using tools which generate smart contracts that can be resolved and signed in a matter of moments, a publisher can not only increase their revenue but also have a better understanding of the marketplace to make better acquisitions in the future. So, why are publishers so hesitant to adopt technology into the rights department?

Jane Tappuni, General Manager of IPR License and an expert on the frontlines of the rights and licensing industry, deals with publishers and rights every day. IPR License, a platform built to discover, buy, and sell international rights online, grapples daily with the challenges publishers face in this brave new technological world. We asked her to weigh in on how technology can help publishers…or not.

Jane Tappuni, IPR License

PageMajik: How will smart contracts help publishers?

Jane Tappuni: A smart contract can be built on the blockchain and allow IP to be transacted, or in simple terms, allow the creator to make money. Smart contracts help you exchange something of value in a transparent, conflict-free way while avoiding the services of a middleman. In publishing this could mean a better way to transact rights, by taking the information out of individual publishing organizations and into a blockchain with smart contracts attached that allow the rights sale to take place.
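The "no middleman" exchange Tappuni describes can be made concrete with a toy sketch. Real smart contracts run on a blockchain (typically written in a language like Solidity); this plain-Python version, with invented parties and prices, only illustrates the escrow-style logic a rights-sale contract encodes: the terms enforce themselves, so no payment means no license.

```python
class RightsSaleContract:
    """Toy model of a rights-sale smart contract: the contract itself
    enforces the terms of the exchange, with no intermediary."""

    def __init__(self, rights_holder, work, price):
        self.rights_holder = rights_holder
        self.work = work
        self.price = price
        self.licensee = None  # rights are unsold until purchase succeeds

    def purchase(self, buyer, payment):
        # The checks below play the role of on-chain validation:
        # the sale only completes if the encoded terms are met.
        if self.licensee is not None:
            raise ValueError("rights already licensed")
        if payment < self.price:
            raise ValueError("insufficient payment")
        self.licensee = buyer
        # The returned record stands in for the transparent,
        # verifiable transaction log a blockchain would provide.
        return {
            "work": self.work,
            "licensed_to": buyer,
            "paid_to": self.rights_holder,
            "amount": payment,
        }


# Invented example: a publisher lists translation rights for sale.
contract = RightsSaleContract("Example Press", "Translation rights (FR)", 5000)
receipt = contract.purchase("Acheteur Editions", 5000)
```

A second purchase attempt, or an underpayment, would raise an error rather than go through, which is the behavioural shift Tappuni points to: the rules live in the shared contract, not in any one party's internal system.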

PageMajik: Do you see smart contracts significantly changing the way publishers handle rights and licensing in the future or will it be a slow adoption over many years in particular sectors of the industry?

Tappuni: Yes I think there is an opportunity to change and improve the way rights and licensing is handled via a blockchain and smart contract solution. This is a massive behavioural shift from using internal, siloed systems into a shared verifiable database of sorts. This change in behaviour could take a long time.

PageMajik: When you work with publishers, what have been their biggest concerns about adopting technological improvements in their business?

Tappuni: Their biggest concern is value for money; return on investment is always the number one concern.

PageMajik: Do you see any downside to publishers relying on technology to help improve their business?

Tappuni: Not as long as publishers choose the right technology tools for the problem they want to solve. All too often organizations implement new software to repeat the processes they already have in place. New technology implementations are a good time to really think about process improvement.

PageMajik: With the adoption of smart contracts to secure rights transactions and track royalties, providing more revenue for publishers and freeing up staff to focus on other work, how do you see the international rights and licensing industry changing? Will there be additional challenges to overcome?

Tappuni: I see this as a possible solution to the huge problem of rights tracking. At the moment publishers use a variety of rights solutions to store their rights data, some good and some not so good. This would take rights data out of siloed publishing systems owned by IT and into a secure, accessible arena. The day-to-day role of a rights professional would not change: they would still be performing a rights sales role, but using a global blockchain solution as a tool that provides verifiable rights-ownership data.

Jane Tappuni has more than 20 years of publishing experience and is currently General Manager (consulting) at IPR License, a place to discover and buy international rights and permissions online. IPR License is owned by the Frankfurter Buchmesse, Copyright Clearance Center, and the China South Publishing & Media Group. Jane is a specialist in publishing technology, with a focus on transactional IP management and solutions, and a graduate of the Oxford University Said Business School Blockchain Strategy Programme.

Is the Science behind AI just Alchemy?

In Primo Levi’s celebrated short story collection The Periodic Table, the story titled “Chromium” illustrates how our collective practices incorporate procedures whose justification no longer applies over time. When Levi worked in a paint manufacturing company, he found that a certain batch of paint had turned solid due to an accidental excess of chromium oxide. In response, he added ammonium chloride to the paint to make it liquid again, and recommended continuing to do so until that batch was used up. He then left his job, but when he returned ten years later, he found that people were still adding ammonium chloride, even though the bad batch had long since been replaced: "And so my ammonium chloride, by now completely useless and probably a bit harmful, is religiously ground into the chromate anti-rust paint on the shore of that lake, and nobody knows why anymore."

According to AI researcher Ali Rahimi, something analogous is happening in the field of AI research today. Last December, he argued that the use of machine-learning algorithms had become a form of alchemy, since the researchers developing and using them don’t know why their algorithms work, or why they fail.

Algorithms are tweaked and tested by trial and error to generate success against benchmarks, but it isn’t really possible to pinpoint whether the success is due to the core algorithm or whether peripheral add-ons are doing all the heavy lifting. Rahimi thinks this is an unhealthy state of affairs and urges greater attention to explanation and root causes. He must have been onto something: his talk received 40 seconds of standing applause from the audience.

Not everyone agrees with Rahimi, however. According to Facebook’s Yann LeCun, Rahimi is fundamentally wrong because, while understanding is certainly good wherever you can get it, understanding often only follows the creation of methods, techniques, and even tricks. To then insist that the creation of new technology take place only where understanding is possible would be to cripple innovation. He makes this claim concrete by arguing that this is precisely why neural nets didn’t get the attention they deserved for over ten years.

Still, I get the sense that Rahimi and LeCun are arguing past each other, because there’s no indication that Rahimi wants the kind of comprehensive understanding that would stifle innovation, so much as a more rigorous approach that avoids pitfalls. In a recent paper, for example, he calls for measures like

  • Breaking down performance measures by different dimensions or categories of the data

  • Including full ablation studies of all changes from prior baselines, testing each component change in isolation and a select number in combination

  • Informing understanding of model behavior with intentional sanity checks, such as analysis on counter-factual or counter-usual data outside of the test distribution

  • Finding and reporting areas where a new method does not perform better than previous baselines

These are clearly not intended to stop progress, but to ensure a more sustainable model of growth. Still, the question of whether this will actually generate better results is one that cannot be answered through armchair philosophy — we’ll simply have to give these methods a shot and see if they prove fruitful.

The State of Automation - Part 4

During previous weeks we’ve been analysing the impact automation and disruptive technologies will likely have on the publishing industry. We’ve explored the innovations on the horizon and how the different roles in book publishing will be affected by them in the short, mid and long-term future.

Automation will have a massive impact on publishing; there is no doubt whatsoever about that. But whether this impact is negative or positive depends greatly on the industry response. Will publishers let innovation happen to them? Or will they act quickly to understand how new technologies work and can be applied to their organisations, then evolve their working practices and reskill their workforce accordingly?

In The Book Industry Study Group’s “State of Supply Chain” survey conducted earlier this year, 33% of respondents said they were somewhat or very concerned about the potential to be replaced by technology or artificial intelligence. This week, in our final post of this four-part series, we look at survival and what publishers, and those who work in the industry, can do to confront the new reality of what many are calling the fourth industrial revolution.

Knowledge is power

If the last 20 years have taught us anything, it’s that rapid innovation can, and will, gobble you up if you’re not prepared for it. Most industries have suffered, some more than others, at the hands of disruptive technologies they were completely ignorant about and ill-prepared to respond to. This is a lesson we must all learn from.

Publishers, who traditionally tend to adopt a rather cautious approach to new technology, will need to know exactly what is around the corner when it comes to automation. Not knowing will mean not being able to respond quickly enough when the world around them is transforming at break-neck speed.

Publishing houses which are aware of these developments, those prepared to take an open-minded approach and start to experiment, and those proactively seeking ways to use automation to their benefit, will automatically be in advantageous positions.

Humans are (still) essential

A survey conducted by Evolve in 2016 revealed that the most in-demand skills in the workplace are “the ability to work cooperatively, flexibly and cohesively”. These soft skills are areas where humans still surpass robots (at least for the next 15 years, by which point some experts predict computational power will equal that of the human brain). Recognising this is key.

While AI will do a fantastic job of automating a variety of tasks, in most cases AI technology is at its most powerful when it interacts with humans and benefits from the creativity, imagination and judgement of the human brain. Being able to harness automation-driven technology, playing to its strengths while aligning it with human capabilities, will give publishers an edge.

In the real world, this applies in the editorial department, for example, where AI can do the heavy lifting when it comes to proofing manuscripts, but the process will still need to be overseen by human eyes. Or in the production department, where AI can be applied to a great many production tasks, but judgement calls and business-critical decisions, on print runs for example, will still need to be made by humans.

Next gen workforce

Many believe that in a world of automation the only people who will survive will be those who came out of the womb coding, and that only employees with an intimate understanding of the latest tech will be of any use in the future. Although rather exaggerated, this is to some extent true. As technology plays a much more influential role in our working lives, job seekers who are tech savvy and can show that they are able to work alongside the latest innovations will always have an edge.

However, on the other side of the coin, another view is that widespread automation will make those with heightened emotional intelligence and a softer skill base more in demand, as reflected by this BBC article on “automation resistant skills”.

Either way, it’s highly likely that those who combine an innate understanding of technology and a willingness to work with it with a range of emotional skills will be the most likely to thrive in an automated workplace, and it is these candidates who will be most valuable to publishers.

Automation is going to change book publishing as we know it beyond all recognition. It will be as gradual as it will be sudden. It will be as beneficial as it will be damaging. Publishers will flourish and perish, and employees will gain and lose. This is what has happened during every major period of disruption since the dawn of time. But the industry has a small window of opportunity to at least learn about how the publishing business might be affected and what sort of steps can be taken to exploit opportunities afforded by automation as opposed to getting left behind.