What are Smart Contracts? (And Why Do Publishers Need Them?)

In last week’s blog post, we discussed how blockchain can help publishers increase revenue by automating rights information and creating “smart contracts” that could speed up the sales and licensing process. But what exactly are smart contracts, how are they generated, and why should publishers consider using them?

The term was coined by developer Nick Szabo in the mid-1990s in an article called “Smart Contracts” in Extropy magazine. Smart contracts digitally facilitate, verify, and enforce an agreement between two parties in a trackable way using algorithms. Each party can see the other’s progress throughout the course of the contract, without needing to be in the same room. As Tsui S. Ng describes in Business Law Today, “The term ‘smart contracts’ refers to computer transaction protocols that execute the terms of a contract automatically based on a set of conditions.” By translating the contract terms into a series of if-then functions, the smart contract can respond as each condition is met and move on to the next. Legal agreements can be struck almost instantly.
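To make the if-then idea concrete, here is a minimal sketch in Python (a stand-in for real smart-contract platforms, which run this logic on a blockchain rather than on one machine); the condition names, parties, and fee are illustrative assumptions:

```python
# Minimal sketch of a smart contract as a sequence of if-then conditions.
# Illustrative only: a real smart contract runs on a blockchain platform,
# not as a local Python object.

class LicensingContract:
    """A toy agreement that advances only as each condition is met,
    with progress visible to both parties."""

    def __init__(self, licensee, fee_due):
        self.licensee = licensee
        self.fee_due = fee_due            # hypothetical flat licensing fee
        self.steps_completed = []

    def record(self, condition, met):
        """IF a condition is met, THEN advance; otherwise the contract waits."""
        if met:
            self.steps_completed.append(condition)
            print(f"{condition}: satisfied")
        else:
            print(f"{condition}: pending; contract paused")

    def is_executed(self):
        # The agreement executes automatically once every term is fulfilled.
        required = ["terms_accepted", "fee_paid", "rights_verified"]
        return all(step in self.steps_completed for step in required)

contract = LicensingContract(licensee="Example Press", fee_due=500.00)
contract.record("terms_accepted", met=True)
contract.record("fee_paid", met=True)
contract.record("rights_verified", met=True)
print("Contract executed:", contract.is_executed())
```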

Though Szabo conceived of smart contracts in the mid-1990s, it is only with the advent of blockchain that they have begun to see real use in the marketplace. Blockchain provides the security, real-time tracking, and accountability that make smart contracts viable for important transactions. And these contracts could prove quite lucrative for the companies that adopt them.

According to an article in Forbes, “Accenture research published at the start of 2017 showed investment banks alone could save up to $12 billion per year by adopting blockchain and smart contracts.”

For publishers, the world of contracts unfortunately continues to be ruled predominantly by paper, creating a lag in transactional payment and royalty collection. But that doesn’t have to be the case going forward.

With the security and speed of smart contracts, publishers could dramatically change their business. “Smart contracts don’t just contain the terms of a contract but also can act in programmed ways, delivering aspects of an agreement once specific terms are fulfilled. If connected to additional resources, such as distribution networks as well as online and physical stores, the contract could automatically deal with recouping costs and paying royalties,” Tom Cox, development director for IPR License, wrote in a piece for Publishing Perspectives last fall. “If the contracts were sophisticated enough, the complex area of royalties could be handled in almost real time by the system.”
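As a rough illustration of the royalty scenario Cox describes, here is a sketch of per-sale settlement logic such a contract might run; the royalty rate, advance, and sale figures are illustrative assumptions, not industry standards:

```python
# Sketch of automatic royalty settlement per sale event, in the spirit of a
# smart contract connected to a store's sales feed. The royalty rate and the
# advance figure are illustrative assumptions.

ROYALTY_RATE = 0.25          # hypothetical 25% author royalty on net receipts
UNRECOUPED_ADVANCE = 30.00   # hypothetical advance still to be earned out

def settle_sale(net_receipt, unrecouped):
    """IF a sale is reported, THEN recoup costs first and pay out the rest."""
    royalty = net_receipt * ROYALTY_RATE
    recouped = min(royalty, unrecouped)      # recoup the advance before paying
    payout = royalty - recouped
    return payout, unrecouped - recouped

unrecouped = UNRECOUPED_ADVANCE
for net_receipt in [12.00, 95.00, 140.00]:   # sale events from the feed
    payout, unrecouped = settle_sale(net_receipt, unrecouped)
    print(f"sale {net_receipt:.2f}: pay author {payout:.2f}, "
          f"advance remaining {unrecouped:.2f}")
```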

For publishers who are finding that rights transactions are ever more essential to their bottom lines, implementing a system that uses smart contracts could revolutionize their business and greatly increase revenue.

Blockchain and the Future of Publishing

In the last six months, the term “blockchain” has been cropping up in publishing conversations—at the London Book Fair earlier this month and at both last week’s STM Conference and the Book Industry Study Group annual meeting. As these conversations occur, it is becoming clear that to many publishers the term is as foreign as “metadata” once was, and that publishers are unclear whether and how this technology will impact their business. In our series on blockchain, we thought it might be helpful to start by taking a step back and defining what blockchain is, before sharing how it can change publishing for the better.

Blockchain is a decentralized, digitized series of information blocks shared in a peer-to-peer network. Each block includes a cryptographic hash of the previous block, a timestamp, and transaction data, which together produce a unique and unalterable chain of information. Blockchain is the technology behind the popular cryptocurrency Bitcoin, and, for the publishing industry, it could change the way business is transacted by helping solve many of the issues currently plaguing publishers, from rights management to piracy.
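A minimal sketch in Python of how blocks chain together via those hashes; this is illustrative, not a real blockchain implementation:

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Each block carries transaction data, a timestamp, and the hash of the
    previous block; altering any earlier block changes every later hash."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block({"event": "rights registered"}, previous_hash="0" * 64)
second = make_block({"event": "license granted"}, previous_hash=genesis["hash"])
print(second["previous_hash"] == genesis["hash"])  # True: an unalterable chain
```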

This is true not only for the scholarly publishing community but also for independent, trade, journal, magazine, and every other kind of content publishing. Because blockchain technology is decentralized and secure, the most practical long-term impact of its use in publishing will be to allow researchers, writers, and members of a publishing house to work on the same platform at the same time, each providing individual input, with universal access and secure collaboration. Blockchain allows all parties to work simultaneously, see what changes have been made, and have those changes attributed to the appropriate party.

One of the key ways it can become immediately useful is in digital rights management. For the would-be licensee, tracking down ownership, determining permissions costs, and locating the right person to speak to about licensing content and photos is time-consuming and difficult. For the licensor, it is often a challenge to accurately track usage of content once access has been granted, which can mean lost revenue.

Through blockchain management of rights, content can be embedded with rights information, and smart contracts can be created that allow for easy sharing, licensing, and usage. For publishers, this will increase revenue not only by automating rights information and freeing up staff for other high-level work, but also by keeping track of important contractual components, including monies due and rights availability. As demand for more granular rights increases, this type of technology will become even more vital to efficient sales transactions, tracking, and reporting, and ultimately to the publisher’s bottom line.

Because of these advances and the opportunities they present for publishers, we are currently implementing blockchain in the next version of our workflow management system, PageMajik, to continue to improve the free flow of information into the marketplace by easing workflow constraints and reducing many time-consuming tasks in the publishing value chain. For users of PageMajik, the workflow will not change, but their work will be much more secure. By improving these systems and giving writers and publishers the ability to easily write and publish their work, we hope to help change the future of publishing.

Blockchain and STM—a marriage made in heaven?

Two weeks ago, in our blog post “The AI Elephant in the Room,” we welcomed the fact that blockchain was to be discussed at the London Book Fair for the very first time. This week, as the crowds descend upon Philadelphia for the STM US Annual Conference, blockchain is once again on the menu, though less as a starter and more as a main course. This is the second year running that the STM Association has featured the topic in its conference programme, and it follows a similar session at the APE Conference in January.

It perhaps comes as little surprise that the STM sector is somewhat ahead of the curve in conversations about blockchain innovation. Whether STM is riper for disruption, more open to change, or simply more in need of it remains to be seen, but over recent years, despite many bemoaning slow rates of change and adoption, we’ve witnessed a great deal of effort go into transformational technology in STM, particularly in areas like Open Access, discoverability, metrics and impact measurement, and peer review.

So why is blockchain such a hot topic in STM right now? What kind of blockchain innovations can we expect to see? And how does the industry stand to gain from them?

It’s all about trust

The fact that STM had a head start on blockchain may quite simply point to a greater need for it. In October 2017, publishers took the academic social networking site ResearchGate to court for mass-scale copyright infringement. It was the most recent in a long line of high-profile cases that have highlighted the flaws in a system still grappling with the new normal of Open Access, social media, and big data.

The industry is plagued by disputes over ownership, provenance, authenticity, and credibility, and battles are regularly fought over the plagiarism and misappropriation of scientific endeavours. STM’s history of trust issues, who-said-what clashes, and copyright court cases makes it the perfect stomping ground for blockchain technologies. Whether new industry-wide initiatives driven by blockchain are rolled out or companies start to embed parts of blockchain technology in their individual ecosystems, scholarly communication could undoubtedly benefit from unequivocal, time-stamped records for every submission, citation, edit, or transaction taking place along the chain. If any industry could do with a “network of trust,” which is how the STM Association is billing blockchain, it’s STM.

Another area of STM publishing where many predict blockchain will make inroads is peer review. Whilst widely considered the bedrock of academic publishing, traditional peer review frequently comes under fire, particularly for slowing down the publishing process. In Blockchain for Research: Perspectives on a New Paradigm for Scholarly Communication, Dr Joris Van Rossum of Digital Science suggests: “The peer review process could greatly improve through blockchain and data underlying the published results could be made available. This would not only improve reproducibility in general, but also allows reviewers to do their work more thoroughly.”

Meanwhile, as new-wave journal publishers like UK-based Veruscript seek to reward reviewers in an effort to make the peer review system more efficient and streamlined, there is clear scope to implement Bitcoin-style technology to facilitate the process.

Blockchain in action

Last week, Digital Science announced its first round of Blockchain Catalyst grants, which are awarded to “any project implementing blockchain in a scholarly or scientific context, especially those that address the dissemination of research”. The initiative was established to find, support, fund and fly the flag for those using blockchain to innovate within the sector.

The publication of the first two projects to be awarded this grant provided a fascinating insight into where and how we might see blockchain technology applied to research in the not so distant future. Hong Kong-based Datax are developing a data crowdsourcing and exchange platform while VIVO from the US are working on a value recognition tool which rewards and incentivises researchers for their contributions.

Equally exciting is the new pilot initiative from ARTiFACTS, which launches this week, using blockchain to record a “permanent, valid and immutable” chain of records in real time, from research to peer review to post-publication.

Scholarly publishers are also discovering that blockchain can offer plenty of benefits in terms of helping them fine-tune and automate day-to-day processes. In a business like STM journal publishing, where a publisher is likely to have a range of journals to manage, with multiple articles and papers on the go, and teams of staff working across editorial and production, blockchain can offer a lifeline when it comes to version control, providing clarity on ownership and navigating digital rights management.

In the world of STM, blockchain makes perfect sense. There are several very obvious areas where this technology could be applied to great effect while making a huge impact and not necessarily forcing scholarly publishers to reinvent the wheel. It’s refreshing to see new initiatives incorporating blockchain being trialled, while others are in the works, and perhaps unsurprising to see STM as the market sector forging ahead and testing the waters before others.

The AI elephant in the room

Two years ago, almost to the day, Oxford University Professor Nick Bostrom, the Founding Director of the Future of Humanity Institute, addressed the crowd at The London Book Fair’s Quantum Conference and gave a riveting keynote talk entitled “The Machine Intelligence Revolution”.

During his presentation, he compared the likely impact of machine intelligence to that of the industrial revolution—with the latter automating manual labour and the former automating intellectual labour. He also predicted that its legacy and impact on the human condition will be even more profound, and that by 2040 we will see machines capable of human-level intelligence, and very shortly after, machines achieving super-intelligence.

While the audience at the time was familiar with terms such as Big Data and augmented reality (AR), the discussion was probably the first time many had been introduced to concepts such as AI and deep learning. On that day, Bostrom didn’t tackle the elephant in the room: “What impact will the machine intelligence revolution have on publishing?”, but the future-gazing talk put the subject on the map and gave the industry something to think long and hard about.

At the time, several delegates dismissed the content of his talk as the stuff of science fiction, a million miles away from their day job of publishing books. For some others, however, it was the starting point of a journey of introspection, where they started to ask themselves important questions such as: How can publishers benefit from machine intelligence? What will the publisher of tomorrow look like? What are the key skills which will be needed? Which roles are likely to be affected by this machine intelligence revolution? And when and how will we need to adapt our models and working practices?

Fast forward two years and, bar a few presentations from technology brands at LBF’s technology stage, Artificial Intelligence seems to have lost its prominence in the seminar programme. It remains to be seen whether or not this is because the industry is more concerned about perceived pressing short-term issues like cashing in on the growth of audiobooks and navigating global economic issues such as Brexit. It is also unclear whether and to what extent publishers today have a clear idea about its practical applications, how it will affect their businesses, and how they will adapt their practices to accommodate it, instead of being disrupted by it.


In spite of this omission, and while Bostrom’s elephant in the room arguably still remains (particularly outside technology circles), it is significant that topics such as blockchain and crypto culture have come to the fore in this year’s LBF seminar programme. It is refreshing to see certain pockets of the industry, such as the academic, children’s, and self-publishing markets, leading the way and debating these innovations on this global stage. Here are PageMajik’s top ten picks from LBF’s speaking programme, for those looking to expand their minds and look to the future:

Discoverability, Superabundance and How to Rise to the Fore

Monday, 9 April 2018, 11:30–12:15

Quantum Conference (the conference centre)

Use your Data to Drive Revenue

Tuesday, 10 April 2018, 13:00–14:00

The Faculty

Blockchain For Books: Towards An Author Centred Payment Model

Tuesday, 10 April 2018, 14:30–15:30

Olympia Room, Grand Hall

Taking the Fear Out of AI: Machine Versus Human, or Technology Enabler for Humanity

Tuesday, 10 April 2018, 15:15–15:45

The Buzz Theatre

Bringing Blockchain to Publishing: Funding Books Like Never Before

Tuesday, 10 April 2018, 15:45–16:30

Author HQ

Scaling Foreign Rights and Reprints With Automation

Tuesday, 10 April 2018, 16:00–17:00

International Export Theatre

Small Steps, Giant Leaps: The Digital Transformation Experience

Wednesday, 11 April 2018, 13:00–14:00

The Faculty

Meeting the Changing Needs of Academic Publishing

Thursday, 12 April 2018, 11:30–12:30

The Faculty

Get A Self-Publishing 3.0 Mindset (ALLi)

Thursday, 12 April 2018, 11:45–12:30

Author HQ

Disruptive Publishing

Thursday, 12 April 2018, 14:30–15:30

Children’s Hub

How the prospering independent publishing sector can become even more prosperous

As indie publishers gather later this week in Austin, Texas, for the annual IBPA Publishing University event, attendees will be buoyed by all the positive news and buzz currently enveloping the sector. Indie presses around the globe are reporting strong growth figures year on year. In the UK, Inpress revealed a 79 percent increase in sales across 60 small publishers at the back end of 2017. Meanwhile, in Publishers Weekly’s annual feature on fast-growing US independents last April, half of the companies featured reported triple-figure growth, making 2017 the strongest year for the sector since the publication started its deep-dive report 20 years ago.

In a world where big-name bestselling authors get snapped up by commercially savvy publishers for seven-figure advance deals, and lesser-known names flock to Amazon’s self-publishing platforms in the thousands, indies occupy an increasingly important middle ground. But what is it exactly that makes indies so appealing? And how can they build further on this seemingly unstoppable growth and success?

There’s something about indies

Indies tend to go about things differently from your average publisher, often assessing writers and their work on literary merit rather than commercial gain. This appeals to many authors who, aside from wanting to make money, also want to feel that their publisher has love and passion for their work. Indies are also known to take a longer-term view, investing in a writer’s career rather than working with them on a title-by-title basis.

Some authors sign up to indies because they want a publishing house which shares their values and mission, while others have previously published books elsewhere but claim they didn’t receive the editorial input or the attention, commitment and dedication they felt they needed. This is a sentiment echoed by Betsy Reavley, co-founder of Bloodhound Books, in her recent interview with the Daily Telegraph: “Some publishers will get behind a particular writer, spending most of their marketing budget on them and leaving others to languish somewhat. Of course it’s about selling lots of books and making money, but it’s also about being transparent, fair and giving the same opportunity to everyone.”

In essence, author care is very much where the indies excel.

Growing pains

But as independent publishers grow larger, adding authors and titles each year, the inevitable tends to happen. The more they take on, often without additional resources, the harder it becomes to offer the consistent author care that made them such an attractive proposition in the first place. Time that was previously spent editing manuscripts, accompanying authors on tours, and marketing and promoting their books is now spent on increasingly unmanageable workflow processes, which become a major drain on resources.

When indies expand rapidly, as they so frequently do, most do not have the IT infrastructure or tools at their disposal to cope with the dramatically increased volume of books coming their way. Their productivity is hampered, and in the process of expansion the publisher’s duty of care to the author, its primary USP, is eroded.

Resolving workflow issues early

The best way to prevent this situation from arising is to address the inevitable workflow problems as early as possible. Whether you’re a large publisher or publish fewer than 50 titles a year, you will eventually find that keeping up with editorial processes, multiple versions, typesetting, proofing, image rights management, and cover design across multiple books becomes arduous and time-consuming. That is the right time to invest in a software solution that can take on the heavy lifting in the workflow.

At PageMajik we work closely with independent publishers of all shapes and sizes to help make their publishing processes simpler. Our publishing workflow productivity tool takes the drudgery out of publishing and can boost efficiency by as much as 40 percent, allowing indies to get back to being indies and doing what they do best.

The Future of Research: What is the Answer?

In scholarly publishing today, there is an ongoing debate about the efficiency and accuracy of workflows and the security of current publishing models. Digital publishing improved speed to publication, and open access provided a simplified, democratized way of sharing research, but these technological advances also brought the threat of piracy, the ease of plagiarism, and the ability for researchers to publish directly, producing a flood of information that researchers must wade through to find something useful.

Eefke Smit, Director of Standards and Technology for the International Association of STM Publishers, said last fall: “The STM publishing world is suffering its own set of trust issues at present. But even with its imperfections, the current system of academic publishing is strong and offers an efficient infrastructure.”

Others disagree.

Piracy and Plagiarism

In this digital world, it is easy for readers to download content for free and pass off research or ideas as their own.

The last year has seen many in the scholarly community discussing how blockchain technology — a decentralized, digitized series of information blocks shared in a peer-to-peer network — could not only help eliminate plagiarism but also allow researchers to collaborate on their work more effectively.

Blockchain builds a chain of individual blocks, each carrying transaction data, a timestamp, and the creator’s information, plus a reference to the previous block, which together form a unique and unalterable record. Because each block can be directly attributed to its author or creator, collaboration becomes simple, speeding up the research process immensely.
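As a sketch of why that attribution can be trusted, here is how a chain of blocks shaped like the description above can be verified mechanically; the block layout and creator names are illustrative assumptions:

```python
import hashlib
import json

def block_hash(block):
    # Hash everything except the stored hash itself.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(creator, data, previous_hash):
    block = {"creator": creator, "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    """Check that every block is intact and linked to its predecessor, so each
    contribution stays attributable to the creator recorded inside it."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False, f"block {i} was altered"
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False, f"block {i} is not linked to block {i - 1}"
    return True, "chain intact; attributions hold"

b0 = make_block("Dr. A", "dataset v1", "0" * 64)
b1 = make_block("Dr. B", "analysis of dataset v1", b0["hash"])
print(verify_chain([b0, b1]))
```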

Last fall, Joris Van Rossum, Special Projects Director at Digital Science, published a report entitled “Blockchain for Research: Perspectives on a New Paradigm for Scholarly Communication” which outlines a number of ways in which scholarly publishing can benefit from the use of blockchain, both from a security and ease of rights management perspective.

Profitability

As mentioned above, blockchain can also be used for rights management. Content blocks can be embedded with rights information and a smart contract that allows easy sharing, licensing, and usage. For example, if a writer wants to use an image to illustrate a journal article, they can track down who holds the rights, find out the licensing cost, and identify whom to contact to secure permissions, all in a matter of moments.
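A rough sketch of how such a lookup might work, with content keyed by a fingerprint to an embedded rights record; the registry shape, agency, fee, and contact are illustrative assumptions:

```python
import hashlib

# Hypothetical rights registry: content fingerprint -> embedded rights record.
RIGHTS_REGISTRY = {}

def register(content_bytes, holder, license_fee, contact):
    fingerprint = hashlib.sha256(content_bytes).hexdigest()
    RIGHTS_REGISTRY[fingerprint] = {
        "holder": holder, "license_fee": license_fee, "contact": contact,
    }
    return fingerprint

def lookup(content_bytes):
    """A writer with the image in hand resolves holder, cost, and contact in
    one step, instead of tracking each of them down by hand."""
    return RIGHTS_REGISTRY.get(hashlib.sha256(content_bytes).hexdigest())

image = b"...image bytes..."
register(image, holder="Example Photo Agency", license_fee=120.00,
         contact="rights@example.com")
print(lookup(image))
```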

For publishers, this will increase revenue not only by automating rights work and freeing up staff for other high-level tasks, but also by empowering them to keep track of monies due and of available rights that can be exploited.

Discoverability

One of the struggles researchers, academics, and publishers now face is the sheer sea of information that exists as a result of Open Access. Making content discoverable and searchable has become one of the main challenges keeping publishers awake at night.

In recent years, many of the innovations coming through in the industry have been geared towards this problem. We’ve seen identifier initiatives like ORCID and Crossref come to the fore and be increasingly adopted by publishers.

Many are predicting that, now that publishers have mastered metadata and SEO and are increasingly incorporating article-level innovations, the next major step will be the adoption of AI technology. Beyond the hype, it is widely predicted that AI will not only advance publishers’ endless quest for improved discoverability but also drive efficiency in the editorial workflow.

Through our product suite, PageMajik, we provide tools that improve the free flow of information into the marketplace by easing workflow constraints and time-consuming tasks in the publishing value chain, from author to publisher to reader. By improving these systems and allowing writers and publishers to easily write and publish their work, we hope to play a major role in shaping the future of research.

An Antidote to the Curse of Knowledge

How workflows can help manage cognitive biases that complicate and delay work

When celebrated cognitive psychologist Steven Pinker was recently asked what he considered to be the greatest impediment to clear communication, he named the “curse of knowledge” cognitive bias. This is the phenomenon where a person who knows something finds it extremely difficult to imagine what it is like to not know it. This can lead to the knowledgeable person using jargon, providing inadequate explanations, and skipping steps in descriptions. For anyone who works with others, these problems are familiar, incredibly frustrating, and until now, seemingly inescapable.

A particularly dramatic case is Leonard Jacobs’ tale of his week from hell freelancing as blog manager at an unnamed financial publishing company. It began with an incredibly vague job description that didn’t go beyond the requirement that he “manage the blog”, continued with the relevant people not being told of his arrival and different supervisors giving inconsistent feedback, and concluded with his dismissal. This was a debacle by any measure. But the question remains: how did it happen? And why do incidents like this continue to happen so frequently in the workplace?

Two common explanations suggest themselves: people are evil, or they are plain incompetent. While tales of sadistic bosses are certainly common enough, the publishing company employees who feature in Jacobs’ tale, “Maria” and “Buehler,” hardly seem wicked. And while they do seem somewhat incompetent, they may well have been excellent at their own jobs. I suggest the better explanation is Pinker’s “curse of knowledge”: communication breaks down in communities not because of individual incompetence, but because of people’s inability to imagine what it is like for another person not to know something. After all, the person who wrote the report made up of “just three words” probably thought it made sense, but only because they could not see in the moment how much background information someone else would actually need to understand it.


Pinker’s own advice for managing this bias is to choose words more carefully and to test out messaging. The problems with solutions like these are twofold. The first is that messaging can only work if you know who the intended audience is; in large organizations this is often impossible, since the audience might not be decided until later. The second is that the root of this bias is that people are mostly unaware they are being unclear, so even if they chose their words more carefully, they could still remain totally opaque.

While no silver bullet for this problem exists, a technological solution that organizations increasingly rely on is the workflow. As Wikipedia defines it:

A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information.

To put it simply, a workflow is a way of formalizing the instructions and rules that govern how the workplace functions, allocating roles, rights, and responsibilities to the various people involved in a project. This doesn’t make people communicate better, but it brings about a situation where they don’t have to. Instructions no longer have to be interpreted from a few cryptic words, because they are embedded within the system itself. People also don’t have to spend time and attention remembering the latest set of instructions; they can simply submit their work and let the pre-set rules take over. And automatically assigned templates can make clear that there are expectations to be met, so that certain kinds of reports—Jacobs’ three-word ones, for example—will simply not do.
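A minimal sketch of the idea: a workflow that embeds the instructions in the system by fixing, for each stage, who may act and what a submission must contain before it can advance. Stage names, roles, and required fields are illustrative assumptions:

```python
# Minimal workflow sketch: stages, the role allowed to act at each stage,
# and the fields an item must carry before it can advance.

WORKFLOW = [
    {"stage": "draft",      "role": "author",     "requires": ["title", "body"]},
    {"stage": "review",     "role": "editor",     "requires": ["summary"]},
    {"stage": "production", "role": "typesetter", "requires": ["format"]},
]

def advance(item, acting_role, stage_index):
    """Advance only if the right role acts and the template is complete, so
    expectations never have to be inferred from a few cryptic words."""
    step = WORKFLOW[stage_index]
    if acting_role != step["role"]:
        return stage_index, f"only the {step['role']} may act at {step['stage']}"
    missing = [f for f in step["requires"] if not item.get(f)]
    if missing:
        return stage_index, f"incomplete: missing {missing}"
    return stage_index + 1, f"{step['stage']} complete"

item = {"title": "Report", "body": "Q3 figures...", "summary": ""}
idx, msg = advance(item, "author", 0)
print(msg)                                # draft complete
idx, msg = advance(item, "editor", idx)
print(msg)                                # blocked: summary still missing
```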

And the best part is that with sophisticated workflow tools, the sheer range of options available ensures that the chosen workflow doesn’t have to be any more constricting than necessary. Human behavior is never going to be as rational or as clear as we would like, but that is no reason not to seek ways to optimize and streamline things as much as possible.

Removing the Pain Points in Journal Publishing

In December, David Crotty, Editorial Director, Journals Policy for Oxford University Press, published a piece in Scholarly Kitchen lamenting the shutdown of Aperta, the workflow solution created by the Public Library of Science (PLOS), and giving voice to the disappointment of a research community that had “high hopes for much-needed improvements in the manuscript submission process.”

More than a decade ago, when journals and their submission processes became digitized, researchers rejoiced at the speed and ease with which their work could be published and at how that would change the future of scholarly publishing. What they had not anticipated was how unnecessarily complicated the submission process could become.

As Crotty notes, PLOS ran into trouble when working with different editorial teams. Each publisher has its own format and style, and submissions from researchers come in a variety of formats, with new media being added all the time, from charts to photos to videos. Publishers have their own individual workflow systems, and scientists and researchers, who want to publish their findings in an effort to further discovery, don’t have time to figure out each individual, often labor-intensive, process. Even once they do figure out the submission process, as Phill Jones, Director of Publishing Innovation at Digital Science, notes in an article in Scholarly Kitchen, “People complain about slow upload speeds and poorly designed workflows that mean they have to babysit a submission for several hours.” Every effort to create a uniform, efficient submission process across all publishers has so far been unsuccessful.

As Jones suggests, “My advice would be for publishers to try out their submission systems themselves (under realistic conditions, with large files and multiple authors) and see how much of a pain they are to use. If you do this, you’ll probably see some easy wins.”

With Open Access and the increased use of social media, the future might see researchers electing to publish and promote their work directly to the research community, bypassing journal publishers altogether. What journal publishers are realizing is that their future could be unstable if they don’t change their publishing process.

If publishers cannot agree on one uniform style guide, then what they need is a system that easily adapts to each individual publisher’s needs, while making the submission process as simple as possible for writers.  

For the last two years, the team at PageMajik has been working with large and small publishers on developing a workflow solution that deals with these very issues. 

The team has created a cloud-based system that allows each publisher to pre-set its specific requirements so that any submission is adapted automatically to the required format. The system also highlights any missing elements so that writers can easily add them and complete the submission process quickly. This bespoke solution allows submissions of all types to be transformed into an easily publishable format, which will help reduce publishing gridlock on both the writer’s and the publisher’s side and help researchers get their work out into the world more quickly.
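To illustrate the general idea (a sketch of the concept, not PageMajik’s actual implementation), a submission checker might apply a publisher’s pre-set requirements and highlight missing elements like so; the field names and presets are illustrative assumptions:

```python
# Sketch: each publisher pre-sets requirements; submissions are checked
# against them, missing elements are highlighted, and simple fields are
# adapted automatically. Field names and presets are hypothetical.

PUBLISHER_PRESETS = {
    "journal_a": {
        "required": ["title", "abstract", "authors", "references"],
        "reference_style": "apa",
    },
}

def check_submission(submission, publisher):
    preset = PUBLISHER_PRESETS[publisher]
    missing = [f for f in preset["required"] if f not in submission]
    adapted = dict(submission)
    adapted["reference_style"] = preset["reference_style"]  # conversion target
    return adapted, missing

paper = {"title": "On Blockchains", "authors": ["R. Author"]}
adapted, missing = check_submission(paper, "journal_a")
print("Missing elements to complete:", missing)  # ['abstract', 'references']
```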

As digital publishing becomes more a part of our lives, eliminating the pain points for both researchers and publishers alike will help traditional journal publishers retain their position in the publishing landscape for the foreseeable future, improve research’s speed to market, and bolster the scholarly community’s ability to produce top-notch work.

Re-examining the Publishing Value Chain

For the last decade, the traditional publishing industry has been contracting. The rise of digital publishing, self-publishing, and open access, coupled with the worldwide recession, forced publishers large and small to conduct massive layoffs. To maintain profit margins, publishers have had to publish an increasing number of books with an ever-dwindling workforce.

The Challenge to Traditional Publishing Models

This increasing workload often leads to lapses in attention, from typo-filled publications to a failure to understand a publication’s impact on the marketplace, as with last year’s Usborne recall of Alex Frith’s Growing Up For Boys, which led to a controversy around the objectification of women.

As these issues occur more frequently, and as rapid direct-to-consumer publishing models like Kindle Direct Publishing and Lulu become more popular, traditional publishers see their role threatened. Publishers must re-examine the value chain and focus energy and budget on the most important roles they play: the curation, editing, and promotion of fine, informative, and entertaining books and journals.

Embracing the Future

To do that, publishers must commit to improving what is a time-consuming and outdated publishing process and to embracing technology where it can make the system more efficient, freeing up humans to focus on higher-level work.

Publishers have often been reluctant to embrace technology because of the cost, the time needed to train staff on a new system, and a lack of proven effectiveness. Yet when publishers embraced the importance of metadata, they found that their books were catalogued better and discovered more easily. Now that they have that in hand, it’s important for publishers to look to the next technological solution to their challenges.

Technology and digital publishing may have forced publishers to deal with a changing marketplace, but technology also offers traditional publishers a chance to update systems, improving workflow and efficiency and ultimately generating increased revenue. Rights management systems, better identification of rights holdings, sales automation, and predictive technology for more profitable acquisitions, to name a few, have already helped publishers take better control of their bottom line.

Trusting the Machines

The addition of machine learning to the publishing process is especially crucial. As Tim O’Reilly, founder of O’Reilly Media, noted at last fall’s W3C meeting, artificial intelligence and machine learning could help publishers with their essential problem of “matching up the people who know something or have a story to tell with the people who want to find them.” Machine learning can learn and improve upon publisher formats and systems, eliminate human error, take on time-consuming tasks, and help publishers better analyze and understand readers’ needs.

It is in the day-to-day publishing process that publishers most need a system that automates redundant tasks and puts all assets and project management in one integrated system, so that they can focus on the higher-level tasks of acquiring and publishing books and journals well. A system that helps every member of the publishing team, from author to editor to production, would allow publishers to be more efficient, respond more quickly to trends, and digitize and update their backlist more rapidly, reclaiming their role in the industry.

How can New University Presses be more disruptive?

At the Researcher to Reader conference in London last week, New University Presses (NUPs) and Academic-led Publishers (ALPs) were very much the hot topics on the agenda. With as many as 19 NUPs becoming operational in the UK in recent years (including the likes of White Rose University Press, UCL, and Cardiff University Press), there is a perceptible shift taking place in academic publishing, one which aims to put academics and institutions at the centre, prioritising their needs above all else. Many believe that this trend will be the most disruptive development the industry has seen since Open Access, once again transforming the role of publishers. But how real is the threat they actually pose? And what role will technology play in this story?

Technology is very much at the heart of everything these new outfits do. They predominantly champion digital-first business models, with print products across monographs, books, and journals, usually via print-on-demand, only a secondary proposition. They are Open Access advocates through and through, driven by a need to disseminate research on the largest possible scale to meet the demands of scholars. They are increasingly investing in affordable technology and service options that can help them establish a strong infrastructure and better manage their workflows day to day. And they do all this at relatively low operational cost – their goal is not to generate revenue, and they tend not to charge article or book processing fees.

The resource issue

While many technological innovations have dramatically reduced NUP set-up and running costs, a lack of human resources has always been, and still is, the main stumbling block to their growth, with most NUPs operating with just one full-time member of staff. Many NUPs are set up out of scholarly libraries, and running the press becomes one more item in the long list of tasks the stretched, modern-day librarian must undertake. Even when an NUP is established as a separate entity with its own dedicated staff, it typically lacks the resources to compete with more established publishers, limiting how much research can realistically be processed, disseminated, and marketed effectively.

This resourcing issue means that while some academics will undoubtedly choose to publish their research via their institution’s press, it is unlikely that an NUP in the early stages of its trajectory will be able to publish the majority of its home university’s academic output. So, while NUPs may by nature be perceived as radically disruptive to the hegemony of traditional publishers, on the metrics of volume, scale, and resources they are unlikely to pose a real threat to that business, at least at present.

Machine learning in the workflow

One of the main challenges NUP employees face is the need to constantly juggle tasks. Staff spend far too long on editorial procedures such as indexing and inputting metadata manually to make sure research is discoverable, and end up with very little time to promote and market the work so that researchers can find it. The entrenched systems most publishers use make these processes slow and arduous, not to mention susceptible to human error.

This is what makes developments in machine learning, and their introduction into the publishing process, so exciting, particularly for resource-starved NUPs. By introducing machine learning into the workflow, we estimate that publishers can free up around 40 per cent of the time spent on manual editorial tasks. By automating these processes, NUP staff can focus instead on adding real value where human attention is needed most – higher-level work such as promoting journals and books so that they reach more eyes around the globe – allowing NUPs to become the disruptive threat traditional publishing fears.
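As one illustration of the kind of editorial task that can be automated, here is a naive keyword-suggestion sketch standing in for the machine-learning models a production system would use; the stopword list and frequency scoring are deliberately simplistic assumptions:

```python
# Illustrative sketch of automating one manual editorial task: drafting
# keyword metadata for discoverability. A naive frequency count stands in
# for the models a production system would use.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on",
             "with", "how"}

def suggest_keywords(text, k=5):
    """Return the k most frequent non-stopword terms as draft index terms,
    for a human editor to approve rather than type from scratch."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(k)]

abstract = ("Open access publishing changes how research monographs reach "
            "readers, and how university presses manage publishing workflows.")
print(suggest_keywords(abstract))
```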
