How would you describe your publishing process – in one word?

It was a regular Friday morning meeting at PageMajik. As Billy Joel correctly predicted decades ago, the regular crowd shuffled in. Before we all sat down and got comfortable, someone popped a question.

“If you had to describe a publishing process in just one word, what would you say?” We stared at him. “Go on, what would that magic word be?”

“Whose publishing process?” someone else shot back.

“It could be a client’s publishing cycle. Or it could be your definition of what a good publishing process should be.”

We began in a clockwise order. “Simple.” “Easy.” “Convenient?” “Efficient.” “Quick.”

“But shouldn’t a publishing process be all of these?”

“What about smart?”

“Sure, but that’s more technology-oriented, don’t you think?”

“Profitable?”

“That covers the business angle, but what else?”

“Classy?”

“That is descriptive of the end-product…”

“Stress-free?”

“I like it. It is customer focused…”

“How about ‘friendly’?”

“Hmm…”

“Or secure?”

“Isn’t that just one aspect of what one would want a publishing process to be?”

“Also, aren’t some of these words interconnected? For instance, your publishing can be friendly only if it’s simple and stress-free…”

“Likewise, it can be efficient only if it’s both convenient and quick.”

“Agreed, so where are we with our magic word?”

“From PageMajik’s perspective, I would say complete.”

“Why would you say that?”

“Take a look at the publishing processes used by most small and mid-sized publishers. They have to outsource many of their intermediate steps.”

“Another thing to remember is that they don’t have the bandwidth for say, pre-contract validation or gauging user experience. PageMajik can help them with not just these, but also with contract management, audience assessment, market assessment, proposal reviews…”

“In that case, what’s the one word you’d associate it with?”

“Complete, I would say…”

“How about ‘end-to-end’?”

“That’s quite ‘comprehensive’, too…”

“Sure, our tool suite ensures that the publishing process is complete, but what does it mean to the publisher?”

“Since it brings all the bits and pieces of the process under one umbrella, it offers him something very important.”

“Which is…?”

“Control.”

“He’s now well and truly in charge of his publishing. With PageMajik, all the pieces fall into place and the publisher is in complete control.”

“You mean the publishing is complete and the publisher is in control.”

“Have it your way.”

“But you can’t just talk about end-to-end publishing or how it offers control to the publisher. You’ll have to show how PageMajik makes it possible.”

“Absolutely. We have the perfect platform for this at the Frankfurt Book Fair.”

“Great, we’re going to be at Hall 4.2, Stand M86. And we’ll be looking forward to taking them through a quick demo of how PageMajik makes their publishing complete.”

As a publisher, how would you describe your publishing process?

And if you had to do it in a single word, what would that word be?

Visit us at the PageMajik stall and share your views on this. Remember, we’re at Hall 4.2, Stand M86.

Trust, but verify

One popular conception of science stresses the need to always question, to always remain skeptical. However, given that scientific work requires coordinating a massive number of people scattered across the world and across disciplines, it is the ability to trust the work of others that allows scientists to build on it with their own. The obvious question, then, is why do people trust each other?

In his book A Social History of Truth, the historian and sociologist of science Steven Shapin offers a surprising answer to how this trust initially came about. Science until the mid-19th century was primarily the pursuit of gentlemen. Birth, wealth, and behaviour were used to judge who was and wasn’t reliable. If a man was wealthy (and it was always a man), it was assumed that he had nothing to gain and plenty to lose in lying about results, since he was financially independent and was embedded in a culture of honour. Gentlemen trusted each other not because they naively believed good science was inevitable but because of non-scientific facts about their mutual social status.

Of course, as time passed, this gatekeeping of science ended and anyone (in principle, at least) could pursue science. In this context, why trust anyone else? Most scientists are committed to truth-finding, and the repercussions of being found out serve as a strong deterrent to anyone tempted. But in our era of publish-or-perish, short-term cheating and sloppiness might still be tempting to many. In fact, there is already a prominent case of this happening.

In December 2014, then-UCLA political science graduate student Michael LaCour and Columbia University political science professor Donald Green published a paper in Science titled “When contact changes minds: An experiment on transmission of support for gay equality.” According to the paper, door-to-door canvassers who were gay were better than their straight counterparts at convincing voters to support same-sex marriage in the long term. The study was picked up and touted in several major media outlets, including The New York Times, The Washington Post, and The Wall Street Journal. By chance, two grad students at UC Berkeley, David Broockman and Joshua Kalla, were trying to carry out a similar study, and during their attempt to replicate LaCour and Green’s result, realised that the original paper had fabricated its data. They published their exposé, “Irregularities in LaCour,” and the paper was retracted.

This episode itself is fascinating, but what I would like to draw attention to is how such an error occurred. Green, although the senior researcher, had never even seen the data that LaCour had fabricated and had instead taken it on faith. When later asked why, Green said, “It’s a very delicate situation when a senior scholar makes a move to look at a junior scholar’s data set. This is his career, and if I reach in and grab it, it may seem like I’m boxing him out.” In response, Ivan Oransky, a co-founder of Retraction Watch, said, “At the end of the day he decided to trust LaCour, which was, in his own words, a mistake.” The New York Times article in which both of them were quoted summarized: “The scientific community’s system for vetting new findings, built on trust, is poorly equipped to detect deliberate misrepresentations.”

What this episode reveals is that our procedures are, for the most part, still based on trust, and that this makes them vulnerable. Reflecting on the LaCour retraction, C. K. Gunsalus, Director of the National Center for Professional and Research Ethics, advocated for greater openness, even titling the piece “If you think it’s rude to ask to look at your co-authors’ data, you’re not doing science.” It really is a fantastic piece, but the one place I’d disagree is that many of its suggestions place all the responsibility on authors themselves to institute good practices. I think a better idea is to build a culture of responsibility institutionally rather than leaving it to individual choice. If collaborators feel uncomfortable asking each other for data or their sources of funding, then the only way around this is to mandate that they do so.

Of course, even this won’t stop all fraud. Multiple authors can still fabricate results together, or be too lazy to verify a colleague’s work and lie about having done so. And this would probably feel too top-down for some academics, who might find having to fill in institutionally mandated information at every significant stage of their work tiresome. But if we want a culture of robust checks and balances, we need to start working towards such a framework.

How the History of Peer Review can help us think better about change

Tech thrives on disruptive innovation, so it comes as no surprise that publishers regard proposals from publishing tech with suspicion — after all, why change something that works? At least part of this resistance can be traced to a tacit understanding of the current system as having been in place for a significant amount of time. A look at history disabuses us of this.

Consider modern peer review. For anyone even loosely associated with academia, the process of submitting a draft to a journal, where it will be anonymously evaluated by two or three referees, is probably familiar and taken for granted as simply the way things are done. In a recent publication in the history of science journal Isis, Melinda Baldwin, a senior editor at Physics Today, argues that this norm of compulsory peer review is more recent than most of us would imagine. Here’s her narrative in brief.

Although sending a submission to experts for their comments can be traced back at least to Henry Oldenburg, the first Secretary of the Royal Society, for centuries peer review was neither systematically carried out nor seen to bestow scientific credibility. A more familiar system emerged in 1831, when William Whewell proposed that submissions to the Philosophical Transactions have two Fellows of the Royal Society comment on them openly in the new journal Proceedings of the Royal Society of London. Whewell’s proposal for published reports was never picked up, but it became increasingly common to send submissions out to anonymous referees.

Still, even in the mid-20th century, it was not uncommon for all editorial decisions to be made in-house, with editors only occasionally consulting external referees, when they deemed it essential. The shift to external peer review was brought about by the increasing amount of work that editors had to do. For example, editors at Science reasoned that “the job of refereeing and suggesting revisions for hundreds of technical papers is neither the best use of their time nor pleasant, satisfying work.” It was simply the increased burden that gave rise to the popularity of external review.

As for the perception that peer review is crucial to scientific legitimacy, Baldwin argues that we need to look at the specific history of the late-20th-century United States. The Cold War led to a ballooning of science spending, and soon this increase was noticed by the public and came under scrutiny and skepticism. Under pressure to become more accountable to non-scientific political actors, modern peer review was touted as the only solution that could ensure both scientific autonomy and public accountability. By the end of this saga, Baldwin argues, it was accepted that any scientific organization had to rely on external referees in order to judge “good science” properly.

The point of looking at this fascinating history is not to simplistically argue that peer review should be done away with because it is new or because it has a history that is embedded in a particular political culture. After all, any aspect of how things are done can be deconstructed in this manner. However, what histories like this make clear is that no part of the rules we abide by and the institutions that bind us are eternal or set in stone. Any proposed change, however initially surprising, should be given a fair shot instead of being resisted because of procedural inertia or complacence. Change is always around the corner, however solid our present world may appear to be.

Why the indies need Artificial Intelligence

This week, I was fortunate enough to address a large group of publishing industry leaders at the IPG (Independent Publishers Guild) Spring Conference in a wide-ranging discussion about Artificial Intelligence and its impact on a range of industries, including publishing.

It was encouraging to see so many publishers unjaded by the AI hype which has taken hold over the years and still as eager as ever to debate and explore the merits and benefits these technologies can bring to their organisations.

Attendees were keen to understand the intricate details about the ways in which AI could change our day-to-day working lives and what potential savings on resources, efficiency and budgets could be made as a result.

It may not be all that obvious to many, but I’ve always felt the indie sector has the potential to be something of a hotbed for pioneering new-age technology like AI. While their budgets might be smaller than those of larger publishers, who can afford to splash vast amounts of money on innovations and technologies, there are a number of reasons why the AI revolution in publishing could start here.

Indies need AI

The small and medium-sized publishers who traditionally make up the independent publishing sector could arguably benefit more from AI embedded in the publishing workflow than any other sector. With tight profit margins, limited or stretched resources and manpower, and processes that often end up being outsourced or freelanced out, AI can make editorial and production procedures far more efficient and cost-effective. For example, it’s now possible to take an unstructured manuscript as a Word document and run it through an ingestion process that produces tagged and structured XHTML within 5–10 minutes. This is a process that often takes publishers days to carry out and eats up several staff members’ time, time that can be better spent on other tasks and that is ultimately money.
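
To make the idea concrete, here is a minimal sketch of what one step of such an ingestion process might look like, using the open-source python-docx library. The style-to-tag mapping is an illustrative assumption, not a description of PageMajik’s actual pipeline:

```python
# Minimal sketch of a Word-to-XHTML ingestion step.
# Assumes python-docx is installed; the style-to-tag mapping is illustrative.
from docx import Document
from xml.sax.saxutils import escape

# Hypothetical mapping from Word paragraph styles to structural tags.
STYLE_TO_TAG = {
    "Title": "h1",
    "Heading 1": "h2",
    "Heading 2": "h3",
    "Quote": "blockquote",
}

def docx_to_xhtml(path: str) -> str:
    """Convert a flat Word manuscript into tagged, structured XHTML."""
    doc = Document(path)
    body = []
    for para in doc.paragraphs:
        if not para.text.strip():
            continue  # skip empty paragraphs
        tag = STYLE_TO_TAG.get(para.style.name, "p")
        body.append(f"<{tag}>{escape(para.text)}</{tag}>")
    return (
        '<html xmlns="http://www.w3.org/1999/xhtml"><body>\n'
        + "\n".join(body)
        + "\n</body></html>"
    )

print(docx_to_xhtml("manuscript.docx"))
```

A real pipeline would also handle character styles, footnotes, tables and images, but even this shape of transformation shows why the work is automatable: it is a mapping from styles to structure.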

Money well spent

At the moment, most publishers, and especially those in the indie sector, are using human beings to carry out tasks that machine learning can do. Often these tasks are off-shored to pre-press businesses, as I’ve mentioned, which results in a financial burden and significant overhead for many publishers. Automating these processes can speed up the conversion from raw Word-document manuscripts to tagged and structured XHTML, improving and enriching the metadata along the way. Once content is converted to XHTML, other previously manual processes can be carried out — for example, pushing the content into InDesign layouts removes up to 80% of the manual intervention in this process.

When this is repeated across an entire list of books or journals, considerable cost and time savings can be made. We estimate that embedding AI in the workflow can free up about 40 per cent of employees’ time in production and editorial departments.

So not only can publishers recoup much of the money spent on pre-press outsourcing, but they can also start to get the best out of their existing production and editorial staff, who can let the AI do the heavy lifting and the mundane, repetitive work, and instead turn their focus to more business-critical, creative or higher-level tasks.

The indie publishing sector has a lot going for it. These scalable, dynamic businesses have the potential to become innovators and forerunners in the AI race. The business case for incorporating AI and machine learning into indie publishing workflows is far stronger than the rationale for implementing most other technologies on the market. It’s simply a matter of time before we see indies using AI in their workflows to become leaner, meaner, more efficient and more cost-effective organisations.

Everyday Rights Management for Publishers

Rights management for publishers seems to be a hot topic, with people extolling its virtues at conferences and think pieces released almost every month. But it might be time to pivot the conversation away from exclusively talking about Digital Rights Management and blockchain solutions and towards the mundane, day-to-day work of rights management.

To start off, a useful primer from the World Intellectual Property Organization (available here) points out that there are at least four distinct asset types for which rights management is relevant:

  • Titles in the publishing house catalogue for the current year, as well as the backlist

  • Contracts with authors which grant the publisher the right to publish and sell

  • Sub-licensing

  • Publishing for new and different readerships through digital means, like print-on-demand or digital formats.

None of these is simple, of course — books contain copyrights for the text, illustrations, photographs, etc., each of which can be subject to a different contract. And apart from individual contracts, there are often laws governing intellectual property that need to be complied with, some international (like the Berne Convention for the Protection of Literary and Artistic Works) and others varying by country and type of use (like the EU’s Directive on Copyright in the Digital Single Market).

For reasons like this, it makes sense for publishers to invest in a system that helps maintain records of contracts, instead of relying on the surprisingly common approach of juggling multiple Excel sheets. The advantage of a purpose-built system is that it can be customized with publishing-specific functions. For example, individual assets can be tracked, helping keep tabs on usage across different editions; and since permissions are usually granted for a certain number of uses, automatic prompts can ensure you are never in violation of a contract or the law.
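
As a rough illustration of the kind of publishing-specific function meant here, consider this sketch of a record that tracks a licensed asset against its permitted number of uses and prompts the user before the limit is reached. The field names and the warning threshold are hypothetical:

```python
# Sketch of usage tracking for a licensed asset, e.g. a photograph.
# Field names and the warning threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LicensedAsset:
    asset_id: str
    permitted_uses: int           # uses granted by the permissions contract
    uses: list = field(default_factory=list)

    def record_use(self, edition: str) -> None:
        """Record a use of the asset, refusing once the licence is exhausted."""
        if len(self.uses) >= self.permitted_uses:
            raise PermissionError(
                f"{self.asset_id}: permitted uses exhausted; renegotiate rights"
            )
        self.uses.append(edition)
        remaining = self.permitted_uses - len(self.uses)
        if remaining <= 1:
            print(f"Warning: {self.asset_id} has {remaining} permitted use(s) left")

photo = LicensedAsset(asset_id="fig-3.1-photo", permitted_uses=3)
photo.record_use("1st edition, print")
photo.record_use("1st edition, ebook")   # triggers the low-remaining warning
photo.record_use("2nd edition, print")
```

Multiply this by every illustration, extract and photograph in a catalogue and the case for a database over a spreadsheet makes itself.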

In addition, the system can be made to follow rules that ensure compliance with the law, and these can be periodically updated. The people in charge of updating the system in line with new legal changes do not have to be the ones actually keying in information, allowing for an efficient division of labour.

All these issues concern only the storage of rights data, and it’s worth stressing this component for two reasons. First, a surprising number of publishers — and this cuts across size, region, and genres published — still use dated systems of manual storage that could be updated with very little investment. Second, a lot of newer systems will have to be built on top of this basic layer, which means that until such a system is in place, talking about more advanced tech is moot.

Of course we do not want to stop at talking about rights storage, and so important topics to discuss will inevitably include options like Digital Rights Management (DRM) and contract management.

In brief: instead of just storing information, DRM refers to access-control technology that sets limits on the use, modification, duplication, and distribution of copyrighted information. Individual assets can be embedded with metadata, making sure the information is available even outside the publisher’s system. For books, this can include software restrictions that control access to assets, such as Adobe Digital Editions’ proprietary DRM, Apple’s FairPlay DRM, and Amazon’s Mobipocket. DRM isn’t without its critics — it has been argued that its use by the big six publishing groups helped Amazon monopolize the ebook market — but it still looks like our best bet to stem the tide of piracy.

As for contracts: as we have written before, smart contracts using blockchain can digitally facilitate, verify, and enforce an agreement between two parties in a transparent and trackable way. This technology is already being developed and implemented in publishing tech, meaning this is less a theoretical possibility and more a matter of shaping current tech.
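
For readers wondering what makes such a ledger “transparent and trackable”, here is a toy illustration in Python (a teaching sketch, not any production smart-contract platform): each record embeds a hash of the previous record, so any later tampering with the history is detectable.

```python
# Toy hash-chained ledger: why tampering with recorded history is detectable.
import hashlib, json, time

def block_hash(block: dict) -> str:
    """Hash a block's contents (excluding its own stored hash)."""
    body = {k: block[k] for k in ("data", "prev_hash", "ts")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"data": data, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = block_hash(block)
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block broken
    return True

chain = [make_block({"event": "contract signed"}, prev_hash="0" * 64)]
chain.append(make_block({"event": "UK rights granted"}, chain[-1]["hash"]))
print(verify(chain))                          # True
chain[0]["data"]["event"] = "contract voided"  # try to rewrite history
print(verify(chain))                          # False: the tampering is detected
```

Real blockchains add distributed consensus on top of this chaining, but the chaining itself is what makes an agreement’s history auditable by both parties.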

Clearly, there are emerging technologies from which we cannot remove our gaze. But as exciting as ideas like DRM and smart contracts are, too often they are thought about in isolation instead of as components in a complex publishing ecosystem. To combat this, we need to contextualize them by thinking about the less shiny aspects of rights management — like the databases where rights managers work on a day-to-day basis. There’s an obvious temptation to fixate on the cutting edge of a field, but doing so overlooks the everyday work of publishing professionals, which may be less exciting but is no less essential.

What three studies tell us about automation in the workplace

One of the most popular topics we regularly tackle on this blog is automation, and the impact technology such as AI, Machine Learning and robotics is having, and will have, on the job market and the way we work.

In recent weeks, both in the US and UK, some interesting studies have been carried out on this hot topic by Pew Research Center and the Office for National Statistics (ONS), respectively. Meanwhile a survey entitled “Humans Wanted: Robots Need You” was conducted by recruitment company ManpowerGroup across 44 different countries, looking at the incorporation of bots into the working world and what this will mean for employees globally.

Dangerous and dirty

The Pew study examines the attitude of Americans in the light of increasing workplace automation, pulling together insightful charts and graphs from a range of public polls produced by the Center recently.

It concludes that while most Americans anticipate widespread disruption in the coming decades, few believe automation will affect their own job. Meanwhile, three-quarters of Americans view job automation in a negative light, with around half of respondents claiming automation has, to date, done more damage than good.

The general public is broadly supportive of automation replacing “dangerous and dirty” roles, and is strongly in favour (85 per cent) of restrictions that would limit automation to replacing only those jobs deemed too dangerous or unhealthy for humans.

Interestingly, when asked whether the government or the individual should assume responsibility for helping workers displaced by the introduction of robots in the workplace, respondents were split down the middle along party lines.

1.5m jobs on the line

Meanwhile, across the pond, the Office for National Statistics (ONS) study states that 1.5m people in England are at high risk of losing their jobs. Having created a bot to analyse the jobs of 20m workers, the ONS concluded that 7.4 per cent of these are at high risk of being replaced, with women bearing the brunt: they occupy 70 per cent of the threatened roles.

There are some interesting correlations between these two studies. Both concur on the types of roles facing disruption — hospitality staff, retail assistants and sales workers top the high-risk list, while those working in medicine and education are widely considered lower risk. Both also agree that young people and part-time workers are particularly vulnerable to workforce automation.

Silver linings?

While these two studies paint an overwhelmingly bleak picture, the ManpowerGroup survey is, on the surface, far more optimistic in its outlook. The report’s overarching message is that humans and robots can coexist, and that automation needn’t be something to fear but something which will provide us with a wealth of new opportunities. It claims that 69 per cent of employers are planning to maintain the size of their workforce, while as many as 18 per cent actually want to hire more staff as a result of automation. To launch the study, Chairman and CEO of ManpowerGroup, Jonas Prising, said: “More and more robots are being added to the workforce, but humans are too.”

But scratch beneath the surface and the situation isn’t as peachy as ManpowerGroup would have you believe. The study states that “just” nine per cent of employers believe automation will lead to job losses. On paper that may not seem like a particularly high percentage, but spread across a global workforce it represents a very large number of jobs indeed.

Spin it how you want: automation will give with one hand and take away with the other; it will optimise some jobs and replace others; it will strike fear into some and leave others in a state of excitable rapture. The world of work is changing around us as we live and breathe, and these studies, however depressing they may be, offer useful insights and a valuable yardstick on the evolving attitudes of employers and workers during very uncertain times.

Last summer, over the course of four blog posts, we discussed how automation is likely to affect different roles and tasks within the publishing ecosystem. To find out how your job might be affected by the rise of the robots, check out our State of Automation series here: part 1, part 2, part 3, part 4.

Blockchain, Coming to a Computer Near You

Last year, Facebook was front-page news when it came to light that Cambridge Analytica had obtained data on hundreds of millions of Facebook users through third-party apps. This week, Facebook CEO Mark Zuckerberg told ABC News that the company is “still looking into” the claim that personal information for millions of users is easily available on Amazon.com Inc’s cloud servers. While Facebook investigates, what are users supposed to do? That is where blockchain might come into play.

Previously, I have written about blockchain and how it applies to publishers and content creation, but could this technology expand to help users police how they interact with the internet and verify their identity as a whole? This week, while Zuckerberg was calling for Congress to regulate Facebook, PayPal invested in Cambridge Blockchain, a startup working to give individuals a way to own their own identity online. Just as blockchain allows bitcoin users to store value without a bank, it may allow users to verify their identity without an intermediary like Facebook.

While PayPal surely sees this as something its users can benefit from in online financial transactions, the technology could have wider implications, providing safe interactions online for users of all kinds and changing online communication and collaboration in a remarkable way. When you consider how many different corporate entities own our data — from banks to retailers, social media networks to airlines — we can see just how exposed we all are to data breaches, cyber-attacks, identity theft and fraud, especially as we don’t actually know how robust and secure these companies’ data infrastructures are. As blockchain applications proliferate in the marketplace, we should start to see this balance redressed and consumers taking back control of their data. It’s still too early to tell how the technology will play out in practice, but this investment by PayPal should give users some hope that they will be better able to protect themselves from identity theft in the future.

Why Preprint repositories are essential to academic work: A Case Study

There is a lot of talk about peer review and how it can be made better, but unfortunately, a lot of this happens at a level of abstraction that makes it easy to miss more modest changes that can go a long way. 

For example, a common way of proceeding in certain sciences is pre-publication review, under which manuscripts are uploaded online for open discussion before official peer review and journal acceptance, giving the community at large an opportunity to review results and methods. The advantage of such a process is that it makes evaluation far more transparent; the downside is that it does not allow anonymity for either author or reviewer. That might seem like a price clearly not worth paying, since anonymity is accepted as an obvious virtue. But a real-life case study indicates why it might be worth it.

A good example of how a larger pool of peers can be more effective than two anonymous reviewers is a recent incident surrounding an arXiv submission. arXiv.org is a site for the submission of preprints of papers in the sciences and mathematics. In 2018, two researchers from the prestigious Indian Institute of Science, Dev Kumar Thapa and Anshu Pandey, posted a paper on arXiv in which they claimed to have discovered an instance of superconductivity at room temperature in “a nanostructured material that is composed of silver particles embedded into a gold matrix”. If true, this would have been a game-changer for materials science and, really, for all of society, since we could theoretically transfer electricity without any loss.

This preprint caught the eye of Brian Skinner, a postdoc at MIT, who probed the data a little more and found something odd: the noise in two supposedly independent measurements appeared to be identical.

Skinner wrote up his observations and posted them on arXiv himself. The story was quickly picked up by various outlets, including Nature, Scientific American, and Wired. The authors, for their part, seem to have dug in their heels and have not admitted to any wrongdoing.

Most relevant for the broader point about opening up peer review is that Skinner is not an expert in the field of superconductivity, so he probably wouldn’t have been a potential reviewer for the paper at all. And his decision to “zoom in closely” on the data isn’t a standard method for vetting papers, so if the preprint hadn’t been posted somewhere relatively public, the discrepancy would likely have gone unnoticed and the paper would have been published. The best-case scenario then would have been a retraction after the fact.

Of course, there is the lingering question of whether such a model could be extended beyond certain sciences. For example, it has been pointed out that medical journals might resist it because making results public prematurely might impede the ability to get proper press attention after full publication. And there are questions about whether the lack of anonymity at the preprint stage would effectively do away with anonymity in review altogether, since the authors will already be known from the preprint. So this is far from a knockdown argument. But I suspect one reason preprints aren’t more popular is simply that many people outside the sciences haven’t heard of them, and that, at least, can be addressed easily enough.

Trending now — AI ethics

In a significant move this week, Google announced the formation of an external global advisory council designed to offer “guidance on ethical issues relating to artificial intelligence, automation and related technologies”.

The Advanced Technology External Advisory Council (ATEAC) will consist of eight leading academics and policy experts from around the world, including former US deputy secretary of state William Joseph Burns, University of Bath computer science professor Joanna Bryson, and mathematician Bubacarr Bah. It will meet for the first time in April and on three further occasions throughout the year.

This move doesn’t necessarily represent a sea change in the tech giant’s policy and attitude towards AI ethics. The company had already established internal councils, panels and review teams to confront the challenges posed by AI and related technologies, and last June it published its seven guiding AI principles, outlining its approach to the adoption of AI. However, it is notably the first time Google has sought worldwide expertise on AI to inform its overall strategy, and it will be interesting to see how this development affects the company’s future business decisions, which have often come under a great deal of criticism.

Google is not the only tech powerhouse examining the ethics of adopting, investing in and incorporating AI innovations. Perhaps coincidentally, just a day before Google launched its external advisory council at the MIT Technology Review’s EmTech Digital conference, Amazon revealed a collaboration with the National Science Foundation and a $10m cash injection to help develop systems based on fairness in AI. Meanwhile, over at Microsoft, Harry Shum, executive VP of its AI and Research Group, announced at the very same conference that the company will be adding “an ethics review focusing on AI issues to its standard checklist of audits that precede the release of new products”.

The discourse around AI, particularly coming from the heavy hitters in Silicon Valley, has certainly changed, that much is clear. Whether this is down to pressure on these firms to adopt a less gung-ho, more measured approach as they slog it out on the AI innovation battlefield remains to be seen.

But is it realistic to expect the likes of Google to genuinely care about AI ethics, to the point of prioritising these issues above their own sizeable business interests? This week the general mood at the summit in San Francisco was sceptical. Rashida Richardson, director of policy research at the AI Now Institute, was quoted by Reuters as saying: “Considering the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”

While AI ethics may now be at the forefront of the agenda at conferences such as EmTech Digital, companies are still not held accountable by the regulation and legislation needed to keep them in check and ensure that their roll-outs are responsible and ethical. In the absence of a single global regulatory body for AI, large tech firms are pretty much left to their own devices, free to self-regulate and to develop AI-driven products and services without directives or consequences. It’s a dangerous situation, and one which has already led to several high-profile, real-world incidents in which AI-based innovations were rushed through and members of the general public paid the price.

If we want the tech giants to offer more than lip service and tokenistic gestures on ethics in AI, maybe now is the time the industry should consider introducing independent regulation to enforce ethics rather than just talk about them.

BISG and PageMajik Survey Shows Publishing Workflow in Need of Rethinking

This piece was originally published in the Publishers Weekly/BookBrunch London Book Fair Show Daily.

When the digital revolution began over a decade ago, publishers were forced to examine their decades-old way of doing business. The move to digital forced them to look for dramatic ways to improve efficiency and keep up with a market they struggled to recognize. Unfortunately, the processes that followed were often just a digital version of an existing system, barely improving productivity and, in some cases, creating additional unnecessary work.

To learn more about pain points in the publishing workflow, PageMajik and Book Industry Study Group (BISG) last fall partnered on a survey of publishing professionals. The goal: identify issues and offer workflow solutions that would help both the industry and individual publishers.

The survey revealed that 17% of respondents spend 25–50% of their time on repetitive tasks, while 47% said repetitive tasks take up 10–25% of their time. Of those repetitive tasks, 58% of respondents felt that some were avoidable. And over half of respondents said they could be more effective in their jobs if repetitive work were eliminated.

Among the largest time-wasting activities, according to respondents, were updating metadata, providing the same information in multiple reports, tracking projects in various formats, and outlining assignments.

A system that automates some of these processes would provide publishers with both efficiency and time. In turn, those publishers could focus on higher-level product development and related strategic work, such as acquisition, design, and promotion.

The conversation about workflow best practices doesn’t end with the survey or this article. On March 28th in New York, the Book Industry Study Group (BISG) will host a meeting focused on cloud-based workflows. Structured as an interactive, two-hour workshop, the program will solicit even more information about the challenges publishers and the book industry face.

PageMajik is also continuing to explore these challenges and share its views on how to address them. For more information about the survey or to discuss your particular workflow challenges and how we might help, please visit me at the PageMajik booth at Stand #3B08.

Jon White is the Global Vice President of Sales & Marketing at PageMajik.

Marshall Cavendish Education launches pilot with PageMajik

Leading Singapore-based education publisher Marshall Cavendish Education will be piloting PageMajik’s publishing workflow-based Content Management System. The rollout will happen in stages upon successful completion of the pilot.

Marshall Cavendish Education produces more than 400 curriculum-based titles each year, and, working with PageMajik, the publisher’s authors, editors and designers will be able to work together on one intuitive platform to improve collaboration, streamline workflows, and assist in meeting deadlines.

Richard Soh, Manager of Publishing Systems and Administration at Marshall Cavendish Education commented: “We are very excited about working with PageMajik. We anticipate that the product will dramatically improve the way we produce and publish content across the organisation, bringing more speed and efficiency into our publishing processes.”

Ashok Giri, CEO at PageMajik stated: “Marshall Cavendish Education has a magnificent history and heritage in education publishing and we are delighted to be working with the company to implement our product across their business. We are really looking forward to this collaboration and are confident that the PageMajik system will bring about positive change to the way Marshall Cavendish Education develop and produce content.”

 

About Marshall Cavendish Education

A subsidiary of Times Publishing Limited, Marshall Cavendish Education is the leading provider of distinctive K–12 educational solutions in Singapore, supplying Singapore schools with innovative, high-quality content.

For 60 years, Marshall Cavendish Education has constantly developed solutions to ensure educational excellence and has earned the approval of the Ministry of Education, Singapore.

Headquartered in Singapore, Marshall Cavendish Education has offices in Hong Kong, China, Thailand, Chile and the United States. The brand is also recognised worldwide for its work in ensuring excellent educational standards and for continuously raising the quality of learning around the world, inspiring students and educators to learn and teach more effectively.

For more information, please visit www.mceducation.com.

 

About PageMajik 

We are a 40-member team of experienced industry professionals and tech wizards with domain experience on both the publishing and the software development sides. Our core team has worked in the publishing industry for a combined ten decades and has used that experience to develop a truly revolutionary product. We listen to the needs of our customers and incorporate forward-facing ideas into the development of our solution. Our product is ever-changing, as we are constantly trying to improve the experience for our users.

For more information, please visit www.pagemajik.com.

Scorecards as a Method to Tackle Submission Overloads

Information is easy to think of all at once, as though it were a single fluid somewhere on the internet. But when we start thinking about its materiality, we are forced to consider how it is processed in discrete quantities through multiple nodes. For publishing specifically, a feature that is simultaneously obvious and somehow under-appreciated is that the massive amount of academic output we make use of depends on the labour of actual editors. Their work involves sifting through submissions, deciding whether to reject them outright, choosing whom to request reviews from, reacting to the reviews received, and making a final judgement on whether to reject, accept, or recommend resubmission.

This dependence on human editors with limited time means they act as gatekeepers, deciding which manuscripts get the green light and which remain locked away in private drawers. One academic philosopher calculates that even on the conservative assumption of a steady 10,000 papers submitted every year, submissions dwarf the roughly 2,000 publication slots available. That means 8,000 papers unaccepted in the first year, which scholars try to publish the next year too, giving 18,000 submissions competing for 2,000 slots. And then 26,000, and then 34,000. A staggering number of submissions will have to be dealt with.
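
The arithmetic behind those numbers is a simple recurrence: each year’s unaccepted papers roll over and compete alongside the new cohort. A few lines of Python reproduce the figures above:

```python
# Back-of-the-envelope backlog calculation from the paragraph above:
# 10,000 new submissions a year competing for 2,000 publication slots.
NEW_PER_YEAR = 10_000
SLOTS_PER_YEAR = 2_000

backlog = 0
for year in range(1, 5):
    competing = backlog + NEW_PER_YEAR          # carry-over plus new papers
    backlog = competing - SLOTS_PER_YEAR        # all but 2,000 roll over again
    print(f"Year {year}: {competing:,} submissions for {SLOTS_PER_YEAR:,} slots")
# Year 1: 10,000 | Year 2: 18,000 | Year 3: 26,000 | Year 4: 34,000
```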

What’s worse, the calculation above assumes a fixed number of submissions every year, and we know this isn’t true — as we’ve written before, according to an estimate from Lutz Bornmann and Ruediger Mutz in their 2014 paper “Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references”, overall output seems to be increasing by 8–9 per cent every year.

Editors cannot look at more than one submission at a time, no matter how much they might wish to. Delays are to be expected, but since new submissions keep arriving during the delay itself, waiting hardly solves the problem. I’m sure editors use a number of strategies to deal with this, but I suspect a fairly common outcome (intentional or otherwise) is differential attention paid to articles based on whether the editor knows the author or topic, whether the writing style is sophisticated, and so on. In other words, there are already bound to be heuristics and rules of thumb for sifting through submissions. This isn’t meant as criticism of editors, but as an acknowledgement that our inability to process large amounts of information simultaneously means we need methods to order information in processable ways. This is a perfect place to introduce AI.

Acknowledging that editors already have a variety of preferences means seeing that those preferences are quite likely to differ systematically across disciplines and idiosyncratically with personal taste. A system for scoring submissions should therefore not simply score every paper against a single pre-set metric; it can instead weigh multiple customizable factors, including the number of the author’s previous submissions and how often those works were cited, the relevance of the title and key terms to the discipline, the similarity of the topics discussed to articles previously published in that particular journal, and so on. The specific weight each of these factors contributes to the score can also be set.
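
A minimal sketch of how such a scorecard might work, assuming each factor has already been normalized to a value between 0 and 1; the factor names and weights here are illustrative, not a description of any particular product:

```python
# Sketch of a customizable submission scorecard with editor-set weights.
def score_submission(factors: dict, weights: dict) -> float:
    """Weighted average of normalized factors; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(weights[name] * factors.get(name, 0.0) for name in weights) / total_weight

# One editor might weight topical fit heavily and author track record lightly.
weights = {
    "author_track_record": 0.1,   # prior submissions and citations
    "topic_relevance": 0.5,       # fit of title and key terms to the discipline
    "journal_similarity": 0.4,    # similarity to previously published articles
}
factors = {
    "author_track_record": 0.2,
    "topic_relevance": 0.9,
    "journal_similarity": 0.7,
}
print(f"Priority score: {score_submission(factors, weights):.2f}")  # 0.75
```

The point of making the weights explicit is exactly the point made below: the heuristics stop being tacit and become something a journal can publish, debate, and revise.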

At first glance, this might seem too coarse-grained a tool, because we can think of all kinds of papers we might like that would be ranked low by some of these metrics. For example, new academics will be at a disadvantage if previous citations are taken into account, work that breaks new ground will be set back because its topics might not match existing trends, and playful titles may lose out to ones which are more to the point (consider how the historian Simon Schaffer has a paper on ship design hilariously titled “Fish and Ships”). These are real and serious concerns.

But there are three reasons I think scorecards should be adopted anyway. First, as I’ve tried to emphasize, many of these tests are already being applied by editors now; submissions by celebrated academics are treated vastly differently from those of unknown grad students. The system just makes this explicit, so holding it to a higher standard than human editors seems unfair. Second, making standards explicit can force academics to coordinate publicly on what exactly they will look for in submissions, possibly making the entire process more transparent instead of the black box it so often is. Third, as submissions increase, editors will have to choose where to focus attention anyway. The question is only whether they look at submissions in order of arrival, at random, or according to some specifiable metric.

It has to be remembered that this is only a sorting mechanism to decide the order in which articles are read, not a judgement on the quality of the articles themselves. There are still many questions and issues to address, but understood in this manner, it seems like a potentially vital tool for dealing with submission increases and regaining some control.

Solving Indexing, one step at a time

Publishing is on the verge of exciting times. The promise of relatively new technology like machine learning, artificial intelligence, and Natural Language Processing makes it incredibly tempting to speculate on the new world we’ll soon be living in, including questions about which processes can be automated and whose jobs will be taken over. (We have even done some of the speculating ourselves, here and here).

While there is certainly a time for thinking carefully about large scale changes to our industry, I do fear that thinking only in terms of large scale changes makes us focus on the wrong questions — by constantly thinking in terms of abstractions and generalities, we can inadvertently ignore and fail to value the concrete.

Consider, for example, the state of indexing. As any academic will tell you, an index can be incredibly helpful for research. By listing major topics and the pages on which they are mentioned, it lets readers decide whether a resource is what they are looking for, giving them a taste of the topics covered and a rough estimate of the extent to which they are addressed. And for research, a well-designed index enables people to home in on precisely the topic they need, since obviously not every resource can be read from scratch each time a paper, book, or website entry needs to be written. The need for the index is very real.

In addition, few people I have talked to in publishing or academia think that current indexing procedures work. A recent popular Twitter thread by historian and editor Audra Wolfe raised many of the issues I have been hearing about. She tweeted that professional indexers were essential for any academic who wasn’t knowledgeable about and competent at indexing, because otherwise the result was often “frustrating and unprofessional”.

In response, historian Bodie Ashton pointed out that early-career researchers simply cannot afford to hire indexers: had he paid $7 per page for his first book, the indexing fee would have been a whole order of magnitude more than what he earned in royalties. Historian of technology Marie Hicks weighed in too, revealing that the turnaround time required by their publisher was too short to hire an indexer at all. Moreover, they pointed out that it simply seemed unacceptable that anyone should need thousands of dollars to produce a professional index.

I agree. This strikes me as a situation ripe for technological intervention: an indispensable job that costs too much and takes far too much time. The biggest obstacle to incorporating technology, however, is that expectations skew too far in two opposite directions. On the one hand, tech optimists seem to think we can come up with an indexing engine that will immediately replace professional indexers, saving both time and money. Unfortunately, the work of indexing is not simply mechanical in a way that can be captured by a simple algorithm; it depends on skill that takes time to develop, and quite often on expertise in the discipline the book belongs to. Unsurprisingly, then, trying to replace human indexers wholesale results in unhappiness all around, with authors reporting being forced to live with clearly inadequate results or having to redo the whole job themselves.

On the other hand, some people over-correct and insist that indexing cannot possibly be improved, and that we should simply accept the way things are. This lapse into fatalistic pessimism is sadly understandable. For some time now there has been a standard story about how things play out: unrealistic expectations about publishing tech lead vendors to advertise abilities they simply cannot deliver on, leading to disappointment all around. As this keeps repeating, publishers naturally start to react to tech with instinctive skepticism. But given that there are real problems to be addressed — as the original tweets testify — this position isn’t sustainable either.

I believe the way out of this impasse is to recognise that this is, in a very real way, an artificial problem. Talking about tech in abstractions and generalisations only allows us to speak of progress in binary terms, as entirely successful or entirely failing. Rather than fall for this, we need to stop asking whether a certain task can be automated or performed by AI engines, and instead ask in what ways tech can actually help us, given where we are. Once we do, we start noticing that multiple products already exist that can assist indexing.

Existing keyword extractors may not be perfect, but they can certainly generate a list of suggestions that dramatically cuts back on time, since authors or indexers then only need to remove unnecessary entries, add any that were left out, and tweak existing ones (for example, merging synonyms, or splitting two different people with the same name who were accidentally classified as one person). Statistical information about the frequency of terms can significantly ease indexing by showing the spread of a topic through the entire manuscript. And certain categories of keywords can be extracted better than others — proper names, for example, are far easier to identify than key concepts. Nor is this the end of the line: in the coming years I expect engines intelligent enough to suggest keywords based on the kind of reader and the subject area.
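
As a small illustration of the frequency-statistics idea, here is a sketch that maps candidate terms to the pages on which they occur, so an indexer can see a topic’s spread at a glance. The page-splitting and the stop-word list are crude, illustrative assumptions; real tools use far richer linguistic processing:

```python
# Sketch: show the spread of candidate index terms across pages.
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "was", "at", "as"}

def term_spread(pages: list) -> dict:
    """Map each candidate term to the page numbers on which it appears."""
    spread = defaultdict(list)
    for page_no, text in enumerate(pages, start=1):
        words = set(re.findall(r"[a-z][a-z-]+", text.lower()))
        for term in words - STOP_WORDS:
            spread[term].append(page_no)
    return spread

pages = [
    "Peer review emerged slowly at the Royal Society.",
    "External referees became common as editorial workloads grew.",
    "Peer review was later tied to public accountability.",
]
# Terms appearing on the most pages float to the top of the suggestion list.
for term, locs in sorted(term_spread(pages).items(), key=lambda kv: -len(kv[1]))[:5]:
    print(term, locs)
```

The output is not an index; it is raw material a human indexer can prune and refine, which is exactly the division of labour argued for here.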

Such a plan is undeniably ambitious, and will require quite a different fundamental attitude towards tech and change. But as one scholar wistfully writes about the task of indexing, an arrangement where publishers can take care of indexing well and quickly would be ideal. This can be made real, but only one step at a time.

Blame Watson: Real AI vs. Fake AI

The phrase “Artificial Intelligence” has become ubiquitous over the last several years and we know where to place the blame — on IBM’s Watson. From predicting the weather to playing Jeopardy to diagnosing patients, Watson, and thus AI, appears to be everywhere and apparently can do anything. No longer the terror that is HAL from “2001: A Space Odyssey,” the new perception of Artificial Intelligence is that machines can and already do help humans with virtually anything.

Because of the excitement around AI and the possibilities of the technology, many companies are blurring the lines of what AI means in order to capitalize on the trend with both investors and consumers. Unfortunately, many of those claims are smoke and mirrors, causing customers to buy into fake AI systems. To avoid being sucked into this trap, we first must outline what AI truly is.

Artificial Intelligence implies using a combination of neural networks and machine learning to provide insight, analysis, and action without human interaction or direction. Useful, autonomous AI eliminates the need for human intervention: the machine does all the work for humans rather than just providing insights. For example, a true AI system could ingest massive amounts of data, provide analysis of that data, and take the next step of acting on that analysis. What many systems and services actually use, instead, is “machine learning.”

Machine learning, while good, still requires human interaction to provide the structure and the continually revised set of rules the machine uses in order to “learn.” Many of these systems are very good, but if a company is seeking to eliminate such work entirely from its human workforce’s to-do list, a machine-learning system alone will not be able to do that.

So, how can you tell if the system you’re considering is truly autonomous and thus worthy of the investment? Ask:

· Does it require a human to manage the system?

· Is it something that requires months of on-boarding?

· Does the system actually do the work for you or does it just provide suggestions for what you then have to do yourself?

Before you buy a system, make sure that it will actually improve your workflow, not add another difficult layer of work for you and your colleagues to manage. The benefit of using AI is always to improve the speed at which work can be done, exceeding what a human can do. If your system is not providing that service, it may be time to rethink it.

2019: Year of the Workflow

Aside from the flu, dieting fads and Blue Monday, for many in the publishing industry January can only mean one thing – it’s time to implement plans and budgets for the year ahead. But as the marketing, sales, editorial, acquisitions and rights teams all bid against each other for more lines in the budget, grappling for a greater slice of the pie, how much is left in the pot for innovation, investment in technology and long-term strategic and visionary thinking?

The answer more often than not, as you might expect, is very little indeed. Decision makers in publishing have traditionally been very reluctant to prioritise investment in new technologies, replacing legacy systems and adapting workflows, sticking with the status quo as opposed to rocking the boat and causing inevitable short-term disruption and anxiety among employees.

Complete system overhauls are extremely rare in publishing, particularly in the larger houses where the scale of cost and disruption is much more prominent. This means companies are often locked into deals with suppliers for decades, leaving them lumbered with archaic solutions which haven’t necessarily adapted with the times to suit their needs. While it’s far from an ideal situation that many in the industry are still using 20th century technology on a day-to-day basis, it is unrealistic to expect publishers to take big, drastic steps in order to change things, especially during times of political and economic uncertainty.

But this doesn’t mean that publishers are turning a blind eye to technology and innovation. Last year we spoke to hundreds of business leaders across all sectors of the publishing world, many of whom were increasingly open to adapting their internal workflows in an effort to boost efficiency and stem loss of revenue.

Why workflows, you may ask? Well, one of the main issues has been that, while most publishers are producing books and journals across all formats, the workflows embedded throughout publishing companies are still primarily print-first models. This means that the processes in place for bringing ebooks, online journals and audiobooks to market are often the same as for print products, which traditionally require much longer lead times. A case study by Gutenberg Technology, published in March last year, revealed the benefits of switching to synchronous print/digital or digital-first workflows, claiming that “47 per cent of time can be saved and as much as 30 per cent of costs can be saved” if publishers were to adopt this modern way of working.

These are compelling statistics, and most CEOs are not taking them lightly. In an industry where there is a constant struggle to keep costs down, profit margins are wafer-thin and market forces are working against us, publishers can no longer afford unnecessary wastage in their supply chains and internal workflows. Streamlining workflows, and looking at how many tasks across the publishing business can be automated thanks to innovative new technologies, is what industry leaders are now turning their attention to as the strategy du jour.

So, while I don’t expect 2019 to be a year when publishers revolutionise the way they use technology and do business, I do believe it will be one where we take baby steps towards a smarter and more agile way of working. And technology will play a vital role in shaping the workflows publishers increasingly choose to adopt in the not-so-distant future.

A More Efficient System: A Look Ahead at 2019

Last year on the blog, we highlighted several ways in which technology is influencing and changing content industries. From newspapers to book and journal publishing, music to fine art, technology is speeding up processes, streamlining workflow, helping with discovery, creation, and fact-checking content, and improving the way we reach customers. What we also discussed is how, in many ways, these changes will impact those working within these industries.

As we look ahead to 2019, I want to emphasize some of the key changes we can anticipate this year to help prepare for the future of publishing and align our industry better with the changes that are going on in the world around us.

Artificial Intelligence

Even though we see artificial intelligence in our day-to-day lives, there continues to be some knee-jerk wariness on the part of publishers. Because Artificial Intelligence is uncharted territory, publishers aren’t alone in that wariness. In a survey of some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists conducted in the summer of 2018 by Pew Research, the experts predicted that “networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities.”

Though AI on a larger, global scale should be approached slowly and deliberately to ensure proper implementation and protection for the humans involved in each industry, on a smaller scale in publishing AI can improve workflows and systems, freeing humans from mundane, time-consuming tasks to pursue more high-level, creative work.

Workflow Solutions and Automation

AI and machine learning are used to help automate some repetitive tasks in publishing. Last year, we ran a series on the State of Automation, which was highlighted by The Bookseller, outlining how automation is being implemented in the world around us—from retail sites to healthcare—and how it will impact the publishing industry on a granular level. Though automation is still very much in its infancy in publishing, it has the potential to be one of the more disruptive changes in the foreseeable future.

By automating many of these systems, departments such as production, editorial and rights may see radically different workloads and responsibilities, and could be freed up to expand their roles into new, creative areas.

Blockchain in Publishing

Blockchain became the hot topic last year. A decentralized, digitized series of information blocks shared across a peer-to-peer network, blockchain is the technology behind the cryptocurrency Bitcoin. For academic publishers, it seems to be the most viable way to chart research, peer review and the dissemination of information.
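
To make the idea concrete, here is a minimal sketch in Python of the core data structure: each block stores the hash of its predecessor, so altering any earlier block invalidates everything after it. This is an illustration only; a real blockchain adds networking and consensus, both of which are omitted here, and the example records are invented.

```python
import hashlib
import json
import time

def hash_block(block):
    """Deterministically hash a block's contents with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """Create a new block that points back at the previous block's hash."""
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

# Build a tiny chain; the record names are hypothetical publishing events.
genesis = make_block("genesis", prev_hash="0" * 64)
chain = [genesis]
for record in ["manuscript submitted", "peer review completed", "article published"]:
    chain.append(make_block(record, prev_hash=hash_block(chain[-1])))

def is_valid(chain):
    """Verify every block correctly references the hash of the one before it."""
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))

print(is_valid(chain))  # True until any block is altered
```

Because each hash depends on the entire content of the previous block, tampering with one link breaks every link after it, which is what makes the ledger tamper-evident.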

Just last week, Dutch publishing consultant Sebastian Posth released a paper entitled “What Is Blockchain: Why and How Should the Industry Care?”, describing the shift to blockchain and cryptocurrencies as being as “significant as the shift that happened with the emergence of the internet.” Posth illustrates how blockchain can help publishers and other media with piracy, payments and reaching customers more effectively, but notes that blockchain will also “confront publishers with new, inherent obstacles and questions: about identity and governance; about laws and regulations; about transactions and revenue models; about crypto-currencies and currency-conversions; about crypto-economics and financial incentives; about censorship and borders – and a lot of things they might have never thought of before.”

AI in publishing production - Simply a question of “when”

Book publishing has always been an industry of tight margins. Particularly in the world of the printed book, publishers have always found themselves at the mercy of overheads which are well beyond their control. From paper and ink costs to fluctuating global currencies and transportation outlays, publishers’ profits have traditionally ebbed and flowed based on external factors, and we haven’t even mentioned challenges with the retailing environment and evolving reading habits.

This was a major concern among many of the C-level executives I met at the Frankfurt Book Fair in October, and more recently at other conferences in the US and UK. On the one hand, they were pleased, even defiant, that the printed book had held strong amid challenging trading conditions and fought off a range of disruptive elements in the marketplace; on the other, they showed growing concern about the rising costs associated with physical product, and were evidently feeling the pinch.

Lean and mean

Books are an expensive business. However, most of the people I spoke to at Frankfurt were insistent that passing these increasing costs on to the consumer was not an option they were willing to explore. Yet they were extremely keen to hear about ways in which technology trends such as AI could help them streamline operations across the business and reduce operational expenditure.

More than ever, directors of publishing houses are looking for ways to make their organisations leaner and meaner, ensuring they are neither overspending nor overstaffing, in an effort to offset some of these spiralling overheads. And many are becoming increasingly aware that the new wave of technologies now available can help them do exactly that.

How soon is now?

Several months ago, on this blog, we explored how automation is likely to impact various roles within publishing. We concluded that technologies such as AI would have their most significant short-term impact in the production and editorial departments, and that we can expect publishers to start rolling out AI-based technologies in their workflows within the next two years or so.

Some in the industry were quick to dismiss this prediction, stating that they just couldn’t see publishers implementing AI-driven technologies in any part of the business in the immediate future. However, judging from these exchanges in Frankfurt, I am now more convinced than ever that leaders are already prepared to take a long, hard look at how technology can help them optimise their business processes.

First past the post

Earlier in the year we suggested that by introducing machine learning into the publishing workflow, particularly across the pre-production and editorial departments, publishers can free up around 40 per cent of the time spent on manual tasks. To my mind this is a conservative figure, especially when you consider how much production resource goes into formatting, layout, typesetting and proofing - all highly automatable tasks that machine learning-driven technology can undertake.
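
To give a sense of why proofing in particular lends itself to automation, here is a minimal sketch of a rule-based checker. The rules are hypothetical examples invented for illustration, not PageMajik’s rule set or any publisher’s actual house style.

```python
import re

# Hypothetical house-style rules: each pairs a pattern with a description.
RULES = [
    (re.compile(r" {2,}"), "multiple consecutive spaces"),
    (re.compile(r"\bteh\b", re.IGNORECASE), "common typo: 'teh'"),
    (re.compile(r"\d+%"), "use 'per cent' rather than '%' in running text"),
    (re.compile(r"[\"'](?=\w)"), "straight quote where a curly quote is expected"),
]

def proof(text):
    """Run every rule over every line and report matches with positions."""
    issues = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RULES:
            for match in pattern.finditer(line):
                issues.append((lineno, match.start() + 1, message))
    return issues

sample = 'The  results showed teh uptake rose by 30% over "last" year.'
for line, col, msg in proof(sample):
    print(f"line {line}, col {col}: {msg}")
```

Checks like these run across an entire manuscript in seconds, which is exactly the kind of mechanical work a machine can take off a proofreader’s plate.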

The technology is there, and the business case has been identified. Now that publishing leaders are starting to really take a keen interest in how this technology works and how it can be applied across their organisations to boost efficiencies, make savings and drive revenue, it’s not a question of “if” but “when” and “how far” they want to go with it. Either way, the production department will certainly be the first to witness AI in action, and the benefits of this transition will be immediately felt all around the business.

Last Chance to Participate in Our Workflow Survey

We have partnered with the Book Industry Study Group (BISG) on a survey of publishing professionals to tell us where they struggle for time in their daily work lives: What work takes up the most time? What could you focus on if you had unlimited hours? How do you see the future of your role in publishing?

In talking to publishing professionals about their jobs, we hope to better understand where the industry is going and how we can provide solutions for challenges we face in our daily work lives.

We are closing the survey at the end of the year and we would love to hear from you. Please go here to tell us what your challenges are and how we can help you.

Discovery, Efficiency, and Better Research Tools: How PageMajik Can Work for Libraries

Open access and the recession have changed the landscape of library budgets and usage over the last 10 years. Library book and journal budgets have decreased; huge volumes of open access content exist, but with no quality control or easy way to discover research; and a new wave of university presses is trying to publish monographs, conference proceedings and other content with the same staff and on a shoestring budget.

Last month, The Charleston Conference gathered librarians, publishers, electronic resource managers, consultants and vendors to discuss these and other issues, to chart a way forward, and to bring together companies working in this space to share services that might be helpful to libraries as their roles continue to change.

The Charleston Premiers is the portion of the conference in which publishers and vendors showcase their newest and most forward-thinking products, which may not be well known to the audience as a whole. The audience then votes to select their favourites in a variety of categories. We were pleased to have PageMajik selected as “Most Innovative Product” by the audience.

“For several years now the Charleston Premiers, which previews new and noteworthy products and innovations on the marketplace, has been gaining popularity at the Charleston Conference, particularly due to its fun, quick-fire pitching format and audience interaction,” said Anthony Watkinson, Director of the Charleston Conference. “This year delegates to the conference were particularly impressed by PageMajik’s pioneering approach towards improving publishing workflows and its innovative application of new tech such as AI, and I’d like to congratulate the company on winning our Most Innovative Product.”

PageMajik was developed out of our 40 years of experience working with publishers and libraries to understand the challenges that come with reduced budgets, small staffs, and vast amounts of information to sift through via open access.

What we discovered at the Charleston Conference was that there are many ways PageMajik can be useful to libraries. Most notably, as libraries enter the publishing side of the industry, using machine learning to tackle the repetitive, time-consuming and expensive aspects of the publishing process allows libraries and new university presses to free up 40% of the time spent on manual editorial and production tasks and focus on higher-level work. Another, more traditional use of PageMajik is the automatic metadata tagging and analysis the system provides, which offers vastly improved discovery in the sea of content, cutting research time in half and making those research results more fruitful.
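
As a rough illustration of the general technique behind automatic tagging, the sketch below suggests tags from term frequency. It is a deliberately naive stand-in rather than a description of PageMajik’s implementation, and the example abstract is invented.

```python
import re
from collections import Counter

# A small stopword list; a real system would use curated vocabularies,
# taxonomies, and trained models rather than raw frequency counts.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "for", "on", "with", "now"}

def suggest_tags(text, max_tags=5):
    """Suggest metadata tags from the most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(max_tags)]

abstract = ("Open access publishing has transformed library budgets. "
            "Libraries now curate open access research alongside "
            "traditional journals, and discovery of research depends "
            "on consistent metadata.")
print(suggest_tags(abstract))  # e.g. ['open', 'access', 'research', ...]
```

Even this crude version hints at the payoff: consistent, machine-generated tags applied across a whole collection are what make content discoverable at scale.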

The team at PageMajik prides itself on its innovative approach to radical improvement, increased speed and cost reduction within the editorial workflow. As we work with libraries more, we are eager to find other ways we can help improve their processes. For more information or to tell us your particular challenges, please go to www.pagemajik.com.

No Winter of Discontent in Newsrooms

As the days grow shorter and the nights grow longer, it’s beginning to feel a lot like winter. But will this cold season mean cold feet when it comes to AI investment and rollouts, as some are predicting? In other words, will this be another “AI Winter”?

There is no denying that there is still a lot of hype around AI. And with this hype comes inevitable disillusionment when some of the bold statements, commitments and trials don’t pan out as expected.

Many industries and companies experience ‘AI fails’ when projects aren’t properly planned, are rushed through, are done for the wrong reasons, are not scalable, or are not supported by the right infrastructure. Recently, for example, the automotive industry was dealt a blow when deep-learning-powered self-driving car experiments didn’t go to plan, setting progress back years.

Peaks and troughs

These peaks and troughs of enthusiasm and disappointment are characteristic of pretty much every major technological disruption in history, and part and parcel of the hype cycle, a concept famously created by IT analysts Gartner, whose simple graphical illustration helps explain the phenomenon.

Some industries, and some companies operating within them, are further along the AI hype cycle than others. Arguably book publishing is at the very beginning of the process, so it has yet to experience a “peak of inflated expectations”, let alone a “trough of disillusionment” or an “AI Winter”, for that matter.

Early adopting cousins

Interestingly, one of the most advanced and progressive industries for innovative AI applications is newspaper and magazine publishing. Our cousins have been experimenting with and rolling out machine learning initiatives since 2013, when the Associated Press became an early adopter, automating formulaic business and sports reporting.
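
The word “formulaic” is the key: when a story’s shape is fixed, structured data can be slotted into a template automatically. Here is a toy sketch of the idea; the teams, data and template are invented for illustration, not the Associated Press’s actual system.

```python
# Invented example data; real systems ingest structured feeds (box scores,
# earnings releases) and choose among many templates based on the numbers.
game = {
    "home": "Riverton FC", "away": "Harbor City",
    "home_score": 3, "away_score": 1, "scorer": "A. Alvarez",
}

def game_report(g):
    """Fill a fixed sentence template from structured match data."""
    if g["home_score"] > g["away_score"]:
        winner, loser = g["home"], g["away"]
        score = f'{g["home_score"]}-{g["away_score"]}'
    else:
        winner, loser = g["away"], g["home"]
        score = f'{g["away_score"]}-{g["home_score"]}'
    return (f'{winner} beat {loser} {score} on Saturday, '
            f'with {g["scorer"]} leading the scoring.')

print(game_report(game))  # Riverton FC beat Harbor City 3-1 on Saturday, ...
```

Multiply this by thousands of games or earnings reports per season and the appeal for a stretched newsroom becomes obvious.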

Two years later, the New York Times implemented an AI project called Editor to help journalists reduce labour-intensive tasks such as research and fact-checking. In 2016, the Washington Post trialled “robot journalism” at the Rio Olympics using its Heliograf software, which analysed data and produced news stories. And last year Reuters launched its News Tracer product, which uses machine learning to sift through social media for legitimate breaking news. Finally, just a few days ago, Quartz announced the launch of the Quartz AI Studio, a new tool to help journalists around the world use machine learning to report their stories.

Forced hands

There are good reasons why newsrooms in particular have been so quick to innovate and experiment with AI, arguably reaching the “Plateau of Productivity” on Gartner’s hype cycle long before others. The tumultuous, cash-strapped sector has faced severe disruption in the form of migration to digital, changing consumer purchasing and reading habits, and a complete shake-up of the traditional business and revenue models which had existed for years (not so dissimilar from the evolution of book publishing, but at breakneck speed). Pew Research reported that in the space of just 10 years, newsroom employment at US newspapers dropped by nearly a quarter. There has never been more pressure on editorial teams to work more efficiently and deliver more with fewer resources.

In the face of such extreme circumstances and weakening financial conditions for media publishers, AI is clearly seen as a knight in shining armour, helping newsrooms to work harder, faster and smarter. And it just so happens that journalism, not traditionally seen as a hotbed of innovation, is the perfect testing ground for AI projects.

Lessons to learn

So, what can the book publishing industry learn from its cousins and their early adoption of AI technologies, given that we potentially have the benefit of a slower curve of disruption? If we look at where AI is being introduced in newsrooms, we can see that most implementations are launched to boost efficiency: not to replace journalists on any meaningful scale, but to assist them, taking care of the more mundane and repetitive aspects of their roles so they can focus on bigger and better things.

As Uber, Tesla and others in the automotive industry are learning, ambitious AI and machine learning projects can be high-risk, long and frustrating processes. Yet, as many newsrooms can now attest, workflow-based AI projects that are innovative yet scalable, useful and well-grounded can be incredibly effective and make all the difference. It is realistic to expect the book industry to start seeing AI applications roll out over the next few years, and judging from the experiences of our cousins, these rollouts will be most successful when embedded in our workflows.
