AI in publishing production - Simply a question of “when”

Book publishing has always been an industry of tight margins. Particularly in the world of the printed book, publishers find themselves at the mercy of overheads well beyond their control. From paper and ink costs to fluctuating global currencies and transportation outlays, publishers’ profits have traditionally ebbed and flowed with external factors, and we haven’t even mentioned the challenges of the retailing environment and evolving reading habits.

This was a major concern among many of the C-level executives I met at the Frankfurt Book Fair in October, and more recently at other conferences in the US and UK. On the one hand, they were happy, even defiant, that the printed book had held strong amid challenging trading conditions and fought off a range of disruptive elements in the marketplace; on the other, they were growing concerned about the rising costs associated with physical product, and evidently feeling the pinch.

Lean and mean

Books are an expensive business. However, most of the people I spoke to at Frankfurt were insistent that passing these increasing costs on to the consumer was not an option they were willing to explore. Yet they were extremely keen to hear about ways in which technology trends such as AI could help them streamline operations across the business and reduce operational expenditure.

More than ever, directors of publishing houses are looking for ways to make their organisations leaner and meaner, and to ensure that they’re not overspending and overstaffing, in an effort to recoup some of these spiralling overheads. And many are becoming increasingly aware of the fact that the new wave of technologies being made available will help them to do exactly that.

How soon is now?

Several months ago, on this blog, we explored how automation is likely to impact various roles within publishing. We concluded that technologies such as AI would have their most significant short-term impact in the production and editorial departments, and that we can expect publishers to start rolling out AI-based technologies in their workflows within the next two years or so.

Some in the industry were quick to dismiss this prediction, stating that they just couldn’t see publishers implementing AI-driven technologies, in any part of the business, in the immediate future. However, judging from these exchanges in Frankfurt I am now more convinced than ever that leaders are already prepared to take a long, hard look at how technology can help them to optimise their business processes.

First past the post

Earlier in the year we suggested that by introducing machine learning into the publishing workflow, particularly across pre-production and editorial departments, publishers can free up around 40 per cent of the time spent on manual tasks. To my mind this is a conservative figure, especially when you consider how much production resource is put into formatting, layout, typesetting and proofing - all highly automatable tasks which machine learning-driven technology can undertake.

The technology is there, and the business case has been identified. Now that publishing leaders are starting to really take a keen interest in how this technology works and how it can be applied across their organisations to boost efficiencies, make savings and drive revenue, it’s not a question of “if” but “when” and “how far” they want to go with it. Either way, the production department will certainly be the first to witness AI in action, and the benefits of this transition will be immediately felt all around the business.

Last Chance to Participate in Our Workflow Survey

We have partnered with the Book Industry Study Group (BISG) on a survey asking publishing professionals where they struggle for time in their daily work lives: What work takes up the most time? What could you focus on if you had unlimited hours? How do you see the future of your role in publishing?

In talking to publishing professionals about their jobs, we hope to better understand where the industry is going and how we can provide solutions for challenges we face in our daily work lives.

We are closing the survey at the end of the year and we would love to hear from you. Please go here to tell us what your challenges are and how we can help you.

Discovery, Efficiency, and Better Research Tools: How PageMajik Can Work for Libraries

Open access and the recession have changed the landscape of library budgets and usage over the last 10 years. Library book and journal budgets have decreased; huge volumes of open access content exist, but with no quality control or easy way to discover research; and a rising number of new university presses are trying to publish monographs, conference proceedings, and other content with the same staff and on a shoestring budget.

Last month, The Charleston Conference gathered librarians, publishers, electronic resource managers, consultants, and vendors to discuss these and other issues, to chart a way forward, and to bring together companies working in the space to share services that might be helpful to libraries as their roles continue to change.

The Charleston Premiers is the portion of the conference in which publishers and vendors showcase their newest and most forward-thinking products, which may not be well known to the audience as a whole. The audience then votes to select their favourites in a variety of categories. We were pleased to have PageMajik selected as “Most Innovative Product” by the audience.

“For several years now the Charleston Premiers, which previews new and noteworthy products and innovations on the marketplace, has been gaining popularity at the Charleston Conference, particularly due to its fun, quick-fire pitching format and audience interaction,” said Anthony Watkinson, Director of the Charleston Conference. “This year delegates to the conference were particularly impressed by PageMajik’s pioneering approach towards improving publishing workflows and its innovative application of new tech such as AI, and I’d like to congratulate the company on winning our Most Innovative Product.”

PageMajik was developed out of our 40 years of experience working with publishers and libraries to understand the challenges that come with reduced budgets, small staffs, and vast amounts of information to sift through via open access.

What we discovered at the Charleston Conference was that there are many ways PageMajik can be useful to libraries. Most significantly, as libraries enter the publishing side of the industry, using machine learning to tackle the repetitive, time-consuming and expensive aspects of the publishing process allows them and new university presses to free up 40% of the time spent on manual editorial and production tasks to focus on higher-level work. Another, more traditional use of PageMajik is the automatic metadata tagging and analysis the system provides, which offers vastly improved discovery in the sea of content, cutting research time in half and making those research results more fruitful.
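To give a feel for what automatic metadata tagging involves, here is a minimal sketch in Python. It is an illustration only, not PageMajik’s actual implementation: a naive frequency-based keyword extractor feeding a basic metadata record.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "for",
             "on", "with", "has", "have", "while", "without"}

def extract_keywords(text, top_n=5):
    # Score candidate terms by raw frequency, ignoring stopwords and
    # very short words. Real systems use far richer models than this.
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

def tag_document(title, abstract):
    # Build a minimal metadata record for a piece of content.
    return {"title": title, "keywords": extract_keywords(abstract)}

record = tag_document(
    "Open Access and Library Budgets",
    "Library budgets for books and journals have decreased while open "
    "access content has grown, leaving researchers without an easy way "
    "to discover relevant research across repositories.",
)
print(record)
```

Even a toy tagger like this suggests how repetitive cataloguing work can be pre-filled by a machine and merely checked by a human.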

The team at PageMajik prides itself on its innovative approach to radical improvement, increased speed and cost reduction within the editorial workflow. As we work with libraries more, we are eager to find other ways we can help improve their processes. For more information or to tell us your particular challenges, please go to www.pagemajik.com.

No Winter of Discontent in Newsrooms

As the days grow shorter and the nights grow longer, it’s beginning to feel a lot like winter. But will this cold season mean cold feet when it comes to AI investment and roll-outs, as some are predicting? In other words, will this be another “AI Winter”?

There is no denying that there is, still, a lot of hype around AI. And with this hype comes inevitable disillusionment when some of the bold statements, commitments and trials don’t pan out as expected.

Many industries and companies experience ‘AI fails’ when projects aren’t properly planned out, are rushed through, are done for the wrong reasons, are not scalable, or are not supported by the correct infrastructure. Recently, for example, the automotive industry was dealt a blow when deep learning-powered self-driving car experiments didn’t go to plan, setting progress back years.

Peaks and troughs

These peaks and troughs of enthusiasm and disappointment are characteristic of pretty much every major technological disruption in history, and part and parcel of the hype cycle, a concept famously created by the IT analyst firm Gartner, whose basic graphical illustration helps to explain the phenomenon.

Some industries, and some companies operating within them, are further along the AI hype cycle than others. Arguably book publishing is at the very beginning of the process, and has yet to experience a “peak of inflated expectations”, let alone a “trough of disillusionment” or an “AI Winter”, for that matter.

Early adopting cousins

Interestingly, one of the most advanced and progressive industries for innovative AI applications is newspaper and magazine publishing. Our cousins have been experimenting with and rolling out machine learning initiatives since 2013, when the Associated Press became an early adopter, automating formulaic business and sports reporting.
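What “automating formulaic reporting” means in practice is easier to see with a toy example. The sketch below is a deliberately simplified, hypothetical illustration in Python; production systems like the AP’s rest on far more sophisticated natural language generation, but the underlying idea of turning structured data into templated prose is the same.

```python
def game_recap(home, away, home_score, away_score):
    # Turn a structured box score into a one-sentence recap
    # by filling a fixed narrative template.
    winner, loser = ((home, away) if home_score > away_score
                     else (away, home))
    margin = abs(home_score - away_score)
    verb = "edged" if margin <= 3 else "beat"
    return (f"{winner} {verb} {loser} "
            f"{max(home_score, away_score)}-{min(home_score, away_score)}.")

print(game_recap("Rovers", "United", 24, 21))   # Rovers edged United 24-21.
print(game_recap("City", "Athletic", 35, 10))   # City beat Athletic 35-10.
```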

Two years later, the New York Times implemented an AI project called Editor to help journalists reduce labour-intensive tasks such as research and fact-checking. In 2016, the Washington Post trialled “robot journalism” at the Rio Olympics using its Heliograf software, which analysed data and produced news stories. Last year, Reuters launched its News Tracer product, which uses machine learning to sift through social media for legitimate breaking news. And just a few days ago, Quartz announced the launch of the Quartz AI Studio, a new tool to help journalists around the world use machine learning to report their stories.

Forced hands

There are good reasons why newsrooms in particular have been so quick to innovate and experiment with AI, arguably reaching the “Plateau of Productivity” on Gartner’s hype cycle long before others. The tumultuous, cash-strapped sector has faced severe disruption in the form of migration to digital, changing consumer purchasing and reading habits, and a complete shake-up of the traditional business and revenue models which had existed for years (not too dissimilar from the evolution of book publishing, but at breakneck speed). Pew Research reported that in the space of just 10 years, newsroom employment at US newspapers dropped by nearly a quarter. There has never been more pressure on editorial teams to work more efficiently and deliver more with fewer resources.

In the face of such extreme circumstances and weakening financial conditions for media publishers, AI is clearly seen as a knight in shining armour, helping newsrooms to work harder, faster and smarter. And it just so happens that journalism, not traditionally seen as a hotbed of innovation, is the perfect testing ground for AI projects.

Lessons to learn

So, what can the book publishing industry learn from its cousins and their early adoption of AI technologies, given that we potentially have the benefit of a slower curve of disruption? If we look at where AI is being introduced in newsrooms, we can see that most implementations are launched to boost efficiency: not to replace journalists on any meaningful scale, but to assist them and take care of the more mundane and repetitive aspects of their roles, so they can focus on bigger and better things.

As Uber, Tesla and others within the automotive industry are learning, ambitious AI and machine learning projects can be incredibly risky, long and frustrating processes. Yet, as many newsrooms can now attest, workflow-based AI projects which are innovative yet scalable, useful and well-grounded can be incredibly effective and make all the difference. It’s realistic that the book industry will start to see AI applications rolling out over the next few years, and judging from the experiences of our cousins, these rollouts will be most successful when embedded in our workflows.

Is it time to open up Peer Review?

Peer review is arguably the keystone of academic publishing, with reviewers serving as gatekeepers of legitimacy, tasked with ensuring that standards are maintained and trust in the field is sustained. It is also, for the most part, a thankless job. This might be about to change.

The practice of peer review probably kicked off when Henry Oldenburg, Secretary of the Royal Society in the mid-17th century, sent out manuscripts to experts for vetting before publication. Since then, peer review has gotten more institutionalized, but the form itself has been remarkably stable: an editor sends out a manuscript to a handful of experts in the relevant sub-domain, and if the experts green-light it as a valuable academic contribution, it is published.

Anonymity is crucial to how this system works. The peer reviewer is unnamed so that they can offer honest evaluations of the quality of a submission without fear of retaliation, particularly if the author is someone with clout. The identities of authors are also kept anonymous, so reviewer judgments aren’t influenced by personal relationships, whether warm or cold. Of course, academics present their work at conferences, and some sub-fields are so small that everyone knows what everyone else is working on, but for the most part secrecy is taken seriously and respected for what it makes possible.


Of late, an increasing chorus of scholars has been questioning whether this accepted wisdom about the importance of anonymity in peer review is actually as wise as it might initially appear. For example, Caroline Schaffalitzky de Muckadell and Esben Nedenskov Petersen argue both that papers accepted for publication should be published along with their peer reviews, and that the reviews published “should include not only reviews from the journal accepting the paper, but also previous reviews which resulted in rejections from other journals”.

This is a fascinating proposal for a number of reasons. Earlier peer reviews and corrections are vital parts of how academia works, and simply sweeping them under the rug makes this whole aspect opaque and obscure. Peer reviewing is also notoriously arbitrary, as is captured well by the meme of the capriciously cruel “#Reviewer2”. John Turri from the University of Waterloo argues that maintaining anonymity keeps academic disciplines from developing open norms about what is worth publishing, frequently leading to frustration when a submission is rejected on seemingly arbitrary grounds. If we want honest discussions about the state of a field, we need to be able to see what standards are actually being employed to determine what gets published in its journals.

As great as these arguments are, the worry is that the persistence of reviewer anonymity may undercut the very benefits they advocate. For example, one scenario de Muckadell and Petersen are keen to avoid is that of unqualified or abusive reviewing: the thought is that if people know their reviews are going to be made public, they will take care not to review abusively. But if reviewers know they will remain anonymous regardless of how abusive their reviews are, it isn’t clear why they would be motivated to change their behaviour. So although the authors might still succeed in their aim to “put forward [reviewers’] arguments for public scrutiny”, this might not be enough to elevate the quality of reviews or decrease the incidence of abuse.

Considerations like these have led to a more radical proposal — remove peer review anonymity.


At first encounter, this might seem like a dangerous suggestion. After all, how can early-career academics, especially those in vulnerable employment, critique honestly, and even harshly when it is called for? But if we look past this first reaction, two strong arguments can be found for the position.

For one, there will finally be an incentive for people to review more responsibly, since they can claim credit for well-written and well-argued reviews but obviously cannot for abusive ones. While a reviewer who has no interest in using their peer review history as an academic credential might still continue being abusive, this should nevertheless improve things significantly.

Second, it should be pointed out that peer review is still academic work, even essential academic work. As Justin Weinberg from the University of South Carolina points out, “my sense is that the credit one gets for peer reviewing is disproportionately small compared to how important peer reviewing is for the academic enterprise”. Giving people the ability to take credit for work well done, then, is a matter of fairness.

An interesting model for how this might be done, without stripping anonymity from reviewers against their will, is the website Publons. It collects information from peer reviewers on a voluntary basis and verifies it with the journal publisher. This allows for the creation of reviewer profiles that each reviewer can claim credit for and add to their CV.
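A rough sketch of the kind of record such a service might keep is below. This is purely illustrative, with hypothetical field names; it is not Publons’ actual data model, just a way of showing how voluntary claiming plus publisher verification could fit together.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewRecord:
    reviewer: str
    journal: str
    year: int
    verified: bool = False   # flipped to True once the journal confirms

@dataclass
class ReviewerProfile:
    name: str
    reviews: list = field(default_factory=list)

    def claim(self, record: ReviewRecord) -> bool:
        # Only publisher-verified reviews count towards the public profile.
        if record.verified:
            self.reviews.append(record)
            return True
        return False

profile = ReviewerProfile("A. Scholar")
review = ReviewRecord("A. Scholar", "Journal of Examples", 2018)
review.verified = True        # the journal confirms the review took place
profile.claim(review)
print(len(profile.reviews))   # 1
```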

As with any solution, there are skeptics. David Roy Smith from Western admitted that, as an early-career academic, he simply hadn’t had the opportunity to review many papers, especially for prestigious journals, and so he wasn’t all that eager to sign up. In addition, there’s the perpetually relevant question of whether the endless march towards quantifying and comparing work and its impact is actually good for academia.

Still, the removal of anonymity in peer review, voluntary for now, seems to be the direction we’re travelling in, and so we need to take it seriously. Universities and funding organizations will need to incorporate the now-public data about peer reviews into their decision making, and decide whether people without publicly recorded reviews will be penalized. Publishers and publishing tech will need to build in the ability to transfer and approve finished peer reviews quickly to sites like these, so workflows don’t get cluttered.

The opening up of peer review is bound to be a momentous transformation of a procedure that hasn’t changed much in centuries, so who knows where we’ll end up.

A Survey on Workflow and Automation

Over the last year since PageMajik launched, we have spoken to hundreds of publishers about their workflow challenges: how much time they spend on repetitive tasks, how this impacts their time management, and what the main barriers are that prevent them from launching a product into the marketplace in a timely fashion. What we have found in these conversations is that, more often than not, old legacy systems are in place that greatly hinder publishers’ efficiency and potential revenue. And when new modules are created, they are based on old technology and don’t adapt to the innovations being utilized in other industries.

With that in mind, we are delighted to announce that we have partnered with the Book Industry Study Group (BISG) to expand these conversations to a larger scale and gain an even better understanding of the challenges publishers face and how these business-critical workflow issues can be resolved. BISG and PageMajik have put together this survey for publishers of all sizes to identify trouble areas in the workflow, to highlight where technology might be vital, to gauge attitudes towards automation, and to reveal how publishers feel automation might benefit their roles.

To participate, please click here and share your experiences.

How to keep publishing tech from “Locking-in” academics

Scholarly publishing has recently been beset by fears that large publishing companies are creating end-to-end publishing platforms that will unfairly create dependence and entrench monopolies by providing services to academics which become the standard. Smaller publishers, unable to afford the kinds of acquisitions the behemoths make, will simply be powerless to compete with the ease and efficiency of, say, their submission systems, for both academics and publishers. As more of the market is captured by the large publishers, they gain ever more power over the terms of contracts, prices, and so on.

The presence of different-sized competitors in a marketplace always raises concerns about the sustainability of the smaller players. After all, their bigger counterparts have more resources to pour into R&D to create more efficient tech and processes, thus giving them more of a market advantage, helping them get even bigger. In cycles which can be considered virtuous or vicious, depending on your standpoint, bigger organizations always threaten the tenability of the smaller ones. The reason we can’t just resign ourselves to this dynamic is that the monopolization of the field by a smaller and smaller number of competitors allows the survivors to dominate the field, allowing them to effectively unilaterally fix the terms of contracts everyone else is forced to abide by.

The usual response to domination by large publishers is to advocate for open science, but Open Access can’t really help here because OA publishers need to be competitive as much as anyone else. If the most effective tools are found exclusively in journals by large for-profit publishers, then even researchers who might otherwise be sympathetic to OA initiatives might opt to publish elsewhere.

But unlike the standard case of “Big Deal” packaging by large publishers, there is one crucial difference in the case of workflow management solutions — competition is possible. Unlike access to journal articles which publishers control, there are plenty of tech companies already dedicated to producing solutions for publishers. Instead of naively hoping that large publishers abstain from engaging in an arms race over tech, we can slightly less naively hope to maintain best-practice guidelines that tech companies are required to abide by. These could include:

  1. Keeping the entire system as modular as possible, so the system architecture can’t by itself straitjacket libraries into an all-or-nothing choice (see the sketch after this list).
  2. Keeping proprietary file formats to a minimum, to allow hassle-free disengagement from the system if required.
  3. Offering different bands of pricing options, to ensure small and medium-sized publishers can access at least a bare-bones system.
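To illustrate the first guideline, here is a minimal sketch in Python of what a modular workflow architecture can look like. The module names and interfaces are hypothetical, invented for this example; the point is only that when each component sits behind a small interface, a publisher can adopt or replace one piece without being locked into the whole platform.

```python
from abc import ABC, abstractmethod

class SubmissionModule(ABC):
    # Any vendor's submission component can stand in for any other.
    @abstractmethod
    def submit(self, manuscript: str) -> str: ...

class ExportModule(ABC):
    # Exports target open formats so a publisher can leave freely.
    @abstractmethod
    def export(self, manuscript: str) -> bytes: ...

class SimpleSubmissions(SubmissionModule):
    def submit(self, manuscript: str) -> str:
        return f"received: {manuscript}"

class XmlExport(ExportModule):
    def export(self, manuscript: str) -> bytes:
        # An open, standard format rather than a proprietary one.
        return f"<manuscript>{manuscript}</manuscript>".encode("utf-8")

class Workflow:
    # Composes whichever modules a publisher chooses to adopt.
    def __init__(self, submissions: SubmissionModule, export: ExportModule):
        self.submissions = submissions
        self.export = export

wf = Workflow(SimpleSubmissions(), XmlExport())
print(wf.submissions.submit("On Open Peer Review"))
print(wf.export.export("On Open Peer Review"))
```

Swapping `XmlExport` for another vendor’s exporter requires no changes elsewhere, which is exactly what keeps the all-or-nothing choice off the table.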

Why would publishing tech companies agree to this? For one, and most idealistically, most people in publishing tech are themselves invested in the health of academia. But for the few who might need a little motivation, the scholarly community at large can make clear which tech it is willing to work with. Soft pressure and shaming, especially in the age of social media, might just be enough for such an ambitious endeavour.

Of course, we can’t be sure such a tactic would even be feasible or effective, particularly given that larger publishers can simply acquire tech companies. But given where we are today, ensuring that publishing tech is willing to help resist publishing monopolies might very well be our best shot at keeping the marketplace competitive. This isn’t going to make resource and size disparities disappear, but it just might ensure that everyone plays by the same rules.

Humans are Afraid of AI, but Why?

In this blog, we write a lot about the future of publishing with the introduction of machine learning and artificial intelligence to help automate repetitive tasks and make workflows more efficient. We also highlight that there is still resistance, in the industry and the world at large, to embracing technology, due to fears about machines taking human jobs. But what is at the heart of that fear? And should we give in to it by regulating how much machine learning we implement in our workflows?

In a recent article in Fast Company on the need for AI, writer Robert Safian shared a colleague’s mantra: “Everything in an organization that can be done by machines should be done by machines — efficiency dictates it. But everything that needs to be done by humans must be done by humans. The defining characteristics of an enterprise — those involving ethics, judgment, creativity, and compassion — require a human touch.” As an example of an instance in which a human touch was needed, the article highlighted the recent decision by Nike to feature controversial NFL star Colin Kaepernick in its “Just do it” campaign. At the center of a debate that has extended beyond the National Football League and its fans all the way to the White House, Kaepernick does not, on paper, seem like a logical choice for a spokesperson. A machine would never have selected him from a list of choices. But as a representative of what Nike stands for, “to bring inspiration and innovation to every athlete”, Kaepernick, who stood up for something he believed in and sacrificed his career, fit perfectly. Only a human could see the potential, and only a human could have made that decision.

A Pew Research study conducted last autumn showed that 72% of respondents are worried about a future in which machines are able to do many jobs currently held by humans. The study went on to outline how people want to place restrictions on how, when, and how much machines are involved in an organization: “in the event that robots and computers become capable of doing many human jobs, for example, 85% of Americans are in favor of limiting machines to performing primarily those jobs that are dangerous or unhealthy for humans.” In addition, respondents were in favor of restricting how many jobs a company could replace with machines, still giving jobs to humans even if machines are capable of doing them faster, or providing guaranteed pay for humans even if a machine was doing the work.

It’s clear from these results that humans are concerned about machines coming into the workplace and taking their jobs, but we are ignoring the second part of that concern: humans are afraid of adapting. Whether that means adapting away from a system they are comfortable with, or adapting to a world in which they must become more creative and focus on the bigger picture, which may require more focused thinking and energy, is unclear. Machines offer the opportunity to stop doing mundane tasks and embrace more creative, thoughtful pursuits and ideas. Why are humans afraid of that? We’d love to hear your thoughts on the issue.

A Week in the Life of Blockchain

It’s hard to ignore the omnipresent buzz surrounding Blockchain. It is mentioned in every media outlet we consume, it infiltrates seminars and conferences we attend for work, it’s a constant on everybody’s social media feeds and it pops up in conversation all too often.

And as the noise around blockchain increases to almost deafening levels, so too does the polarity between blockchain’s opposing factions, with evangelists and naysayers alike shouting ever louder. If we take a look at just a small selection of articles which have appeared in the media over the last few weeks alone we can observe how a remarkably conflicted and jarring landscape is starting to develop.

Blockchain is “useless”

This week at the Blockshow Conference in Las Vegas, economist and renowned crypto-critic Nouriel Roubini, who was dubbed Dr Doom after he predicted the global economic crisis of 2008, stated that “blockchain is probably one of the most overhyped technologies ever, with the amount of hype vastly exceeding what are going to be the applications of it.” In a follow-up interview with Forbes, he went as far as to say: “It’s useless technology and will never go anywhere because of the proof of stake and scalability issues. No matter what, this is not going to become another benchmark because it is just too slow.”

Scalability and speed are concerns echoed by Daniel Newman in his article entitled Don’t believe the hype: understanding blockchain’s limits, who also adds trust and security into the mix of stumbling blocks which are dragging blockchain out of its “honeymoon phase”, as he puts it.

Downplaying growth

Meanwhile, a cluster of reputable IT analysts have published reports in recent weeks which bust the myth of widespread blockchain adoption and roll-outs. Gartner’s 2018 CIO survey found that just one per cent of the CIOs who took part indicated any kind of blockchain adoption, with 77 per cent claiming they had no interest in the technology and no planned action to investigate or develop it. The firm also claimed that the technology is entering a “trough of disillusionment” phase as interest in blockchain “wanes”.

Backing this up, Forrester also released a report which estimates that 90 per cent of active blockchain projects will either be put on hold or abandoned altogether.

The other side of the (bit)coin

Despite all this, the balance of media coverage tips firmly in the opposite direction, as international governments, financial companies, tech firms and many others, eager to be seen as a cut above the rest and to skyrocket their share prices, seek to promote their adoption of blockchain technologies.

Take this week alone as an example. We’ve seen:

· The World Bank launching its first blockchain bond

· Russian state pension fund announcing plans to deploy blockchain tech

· The OECD announcing a new Blockchain Policy Forum event

· China launching its new Blockchain Lab initiative

And this is merely a minuscule sample of the vast cacophony surrounding blockchain in any given week.

Just to build the hype up even further, PwC this week published its 2018 Global Blockchain Survey, which found that an astonishing 84 per cent of executives interviewed said their companies are “actively involved” with blockchain technology, research which, confusingly, paints a completely different picture from the studies published by the IT analysts.

Staying grounded

As with many exciting and innovative technologies, everybody wants to jump on the bandwagon and find a way to apply it and make it work for their business. Some will find that once the initial excitement recedes, a project is deemed too ambitious or that there are too many barriers rendering it difficult to get off the ground, whereas others, often with more realistic applications, will succeed and transform the way they work.

With blockchain we are at a pivotal phase whereby companies need to understand exactly how the technology can fit into their work cycle and be of benefit to them. As Varun Mayya, CEO of Avalon Labs says: “The good news is that good projects will continue to survive and authentic ones will continue to reap the benefits of both blockchain and smart contracts.” I am delighted to be part of one such organisation recognised last week by Forbes as one of the companies using blockchain technologies to help transform the publishing industry and improve education.

Re-inventing the Research Text

There’s been a sustained conversation for a while now about how tech will impact the ways research is produced, read, and propagated. With the advent of complex digital books, for example, researchers will finally be able to store the wealth of raw data and sources they collected during fieldwork and make it immediately accessible to anyone who wants more information, instead of forcing readers to go online and dig through files.

But innovations like this take the book itself (as it currently exists) for granted. Even though it doesn’t strike us as such in our everyday lives, the book is a profoundly unnatural way of presenting information to others. It requires all the relevant information, regardless of subject, complexity, and source type, to fit within a linear text of typically a few hundred pages. A fascinating question to consider is how the reading experience could change if we were willing to alter the book’s linearity itself.

Consider, for example, the set of texts that proceed axiomatically, that is, by building an elaborate deductive system from a set of basic assumptions. I have in mind works like Newton’s Principia, Wittgenstein’s Tractatus, and Spinoza’s Ethics. I can’t speak for the authors, but for most of us who attempt to read these today, understanding what’s being said usually means frantically flipping back to the various theorems proven earlier in order to put them together in a way that makes the later theorems intelligible. The biggest hurdle to faster learning here is the linearity our current books impose on us. Smart ebooks could change this, and there are already some indications of how the constraint can be done away with.

A PhD student at Boston College, John Bagby, created visualizations of the entirety of Spinoza’s Ethics, with each node representing a proposition.

Clicking on a node reveals its connections to other nodes, and also brings up a dialogue box which states all the relevant propositions (the one selected, plus its parent and child propositions). Just like that, the linearity that was taken to be constitutive of our reading experience for centuries is shown to be a constraint, and the visualization makes the connections far easier to pursue. That isn’t to say that reading Spinoza becomes easy, but it’s undeniable that this would make the text tremendously more accessible, both for beginners attempting to read it and for experienced researchers hunting down some obscure subtlety.
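To give a feel for how such a non-linear text can be represented, here is a minimal sketch in Python (an illustration only; Bagby’s actual tooling may work quite differently, and the proposition wordings and dependencies below are paraphrased for the example). A deductive work becomes a directed graph, with an edge running from each premise to the propositions built upon it:

```python
from collections import defaultdict

class PropositionGraph:
    # A deductive text as a graph: edges run from premise to dependent.
    def __init__(self):
        self.text = {}
        self.children = defaultdict(list)
        self.parents = defaultdict(list)

    def add(self, prop_id, statement, depends_on=()):
        self.text[prop_id] = statement
        for premise in depends_on:
            self.children[premise].append(prop_id)
            self.parents[prop_id].append(premise)

    def context(self, prop_id):
        # What a reader sees on clicking a node: the proposition itself,
        # everything it rests on, and everything built on top of it.
        return {
            "selected": self.text[prop_id],
            "parents": [self.text[p] for p in self.parents[prop_id]],
            "children": [self.text[c] for c in self.children[prop_id]],
        }

ethics = PropositionGraph()
ethics.add("1p1", "Substance is by nature prior to its affections.")
ethics.add("1p5", "There cannot be two substances of the same nature.",
           depends_on=["1p1"])
ethics.add("1p14", "Besides God, no substance can be conceived.",
           depends_on=["1p5"])
print(ethics.context("1p5"))
```

The reader’s “flipping back” becomes a single lookup, which is precisely the gain the visualization delivers.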

As ground-breaking as this is, an obvious drawback is that very few books lend themselves to being transformed in this particular manner. But we shouldn’t be too quick to dismiss its relevance. For far too long we’ve been asking ourselves what the next big idea will be. Perhaps it is time to acknowledge that the future isn’t about a single all-encompassing idea but about many ideas, pushing in many different directions. For such a future, however, tech companies will have to stop thinking in terms of delivering a single, clear-cut solution, and instead think in terms of platforms capacious enough to allow different authors, designers, and publishers to push the envelope in their own ways, on their own terms.
