Solving Indexing, one step at a time

Publishing is on the verge of exciting times. The promise of relatively new technologies like machine learning, artificial intelligence, and natural language processing makes it incredibly tempting to speculate about the new world we'll soon be living in, including questions about which processes can be automated and whose jobs will be taken over. (We have even done some of the speculating ourselves, here and here.)

While there is certainly a time for thinking carefully about large-scale changes to our industry, I fear that thinking only in terms of large-scale changes makes us focus on the wrong questions: by constantly dealing in abstractions and generalities, we can inadvertently ignore, and fail to value, the concrete.

Consider, for example, the state of indexing. As any academic will tell you, an index can be incredibly helpful for research. By listing major topics and the pages on which they are mentioned, it lets readers decide whether a resource is what they are looking for, giving them a taste of the topics covered as well as a rough sense of the depth in which they are addressed. And a well-designed index enables researchers to home in on precisely the topic they need, since no one can read every resource from scratch each time a paper, a book, or a website entry has to be written. The need for the index, then, is very real.

In addition, few people I have talked to in publishing or academia think that current indexing procedures work. A recent popular Twitter thread by historian and editor Audra Wolfe raised many of the issues I have been hearing about. She tweeted that professional indexers were essential for any academic who wasn't knowledgeable about and competent at indexing, because otherwise the result was often "frustrating and unprofessional".

In response, historian Bodie Ashton pointed out that early career researchers simply cannot afford to hire indexers: had he paid $7 per page for his first book, the indexing fee would have been a full order of magnitude more than what he would have earned in royalties. Historian of technology Marie Hicks weighed in too, revealing that the turnaround time required by their publisher was too short to hire an indexer at all. Moreover, they pointed out that it simply seemed unacceptable for anyone to need thousands of dollars to produce a professional index.

I agree. This strikes me as a situation ripe for technological intervention: an indispensable job that costs too much and takes far too long. The biggest obstacle to incorporating technology, however, is that expectations skew too far in two directions. On the one hand, tech optimists seem to think we can build an indexing engine that will immediately replace professional indexers, saving authors both time and money. Unfortunately, the work of indexing is not simply mechanical in a way that can be captured by a simple algorithm; it depends on skill that takes time to develop, and often on expertise in the discipline the book belongs to. Unsurprisingly, trying to replace human indexers wholesale results in unhappiness all around, with authors reporting that they must either live with clearly inadequate results or redo the whole job themselves.

On the other hand, some people over-correct and insist that indexing cannot possibly be improved, that we should simply accept the way things are. This lapse into fatalistic pessimism is sadly understandable. For some time now, there has been a standard story about how things play out: unrealistic expectations push publishing tech vendors to advertise abilities they simply cannot deliver on, and disappointment follows all around. As this keeps repeating, publishers naturally start to react to tech with instinctive skepticism. But given that there are real problems to be addressed, as the original tweets testify, this position isn't sustainable either.

I believe the way out of this impasse is to recognise that this is, in a very real way, an artificial problem. Talking about tech in abstractions and generalisations only allows us to speak of progress in binary terms, as entirely successful or entirely failing. Rather than fall for this, we need to stop asking whether a task can be automated or performed by AI engines, and instead ask in what ways tech can actually help us, given where we are. Once we do, we can start noticing that there are already multiple products that can assist indexing.

Existing keyword extractors may not be perfect, but they can certainly generate a list of suggestions that dramatically cuts back on time, since authors or indexers only need to remove unnecessary entries, add any that were left out, and tweak existing ones (merging synonyms, say, or separating two different people with the same name who were accidentally classified as one). Statistical information about the frequency of terms can significantly ease indexing by showing the spread of a topic through the entire manuscript. And certain categories of keywords can be extracted better than others: proper names, for example, are far easier to identify than key concepts. Nor is this the end of the line; in the coming years I expect engines intelligent enough to generate keywords tailored to the intended reader and subject area.
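
To make the first of these concrete, here is a minimal sketch of a frequency-based suggestion engine of the kind described above. Everything in it is an illustrative assumption rather than a description of any real product: the stop-word list is deliberately tiny, and proper names are caught with a naive capitalised-pair pattern where a production tool would use a proper NLP library.

```python
from collections import Counter, defaultdict
import re

# A deliberately tiny stop-word list; a real tool would use a fuller one.
STOPWORDS = {"the", "and", "that", "this", "with", "from", "have", "were"}

def suggest_index_terms(pages, min_count=2):
    """Rank candidate index entries by frequency and record the pages
    they appear on. The output is a starting point for a human indexer,
    not a finished index."""
    counts = Counter()
    locations = defaultdict(set)
    for page_no, text in enumerate(pages, start=1):
        # Crude proper-name candidates: capitalised two-word sequences.
        for name in re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text):
            counts[name] += 1
            locations[name].add(page_no)
        # Everything else is counted word by word.
        for word in re.findall(r"\b[a-z]{4,}\b", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
                locations[word].add(page_no)
    return [(term, sorted(locations[term]))
            for term, n in counts.most_common() if n >= min_count]

pages = [
    "Ada Lovelace wrote the first program for the Analytical Engine.",
    "Charles Babbage designed the machine, and Ada Lovelace programmed it.",
]
for term, where in suggest_index_terms(pages):
    print(f"{term}: pages {where}")
```

Even something this crude surfaces the two ingredients an indexer needs: ranked candidate terms and the pages on which they occur.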

Such a plan is undeniably ambitious, and will require a fundamentally different attitude towards tech and change. But as one scholar wistfully notes about the task of indexing, an arrangement in which publishers could take care of indexing well and quickly would be ideal. This can be made real, but only one step at a time.

Blame Watson: Real AI vs. Fake AI

The phrase "Artificial Intelligence" has become ubiquitous over the last several years, and we know where to place the blame: on IBM's Watson. From predicting the weather to playing Jeopardy to diagnosing patients, Watson, and thus AI, appears to be everywhere and apparently can do anything. No longer the terror of HAL from "2001: A Space Odyssey," AI is now perceived as something that can, and already does, help humans with virtually anything.

Because of the excitement around AI and the possibilities this technology opens up, many companies are blurring the lines of what AI means in order to capitalize on the trend with both investors and consumers. Unfortunately, many of those claims are smoke and mirrors, leading customers to buy into fake AI systems. To avoid being sucked into this trap, we first need to outline what AI truly is.

Artificial intelligence implies a combination of neural networks and machine learning that provides insight, analysis, and action without human interaction or direction. Useful, autonomous AI eliminates the need for human intervention: the machine does the work for humans rather than just providing insights. A true AI system, for example, could ingest massive amounts of data, analyse that data, and then take the next step and act on the analysis. What many systems and services actually use, instead, is "machine learning."

Machine learning, while good, still requires humans to provide the structure and the continually revised set of rules the machine uses in order to "learn." Many of these systems are very good, but if a company is seeking to remove this work entirely from its human workforce's to-do list, machine learning alone will not manage it.
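
To make the insight-versus-action distinction concrete, here is a toy sketch. Every name in it (the sales records, the threshold, the reprice callback) is invented for the example; the point is only where the pipeline stops.

```python
# Toy illustration only: names, data, and thresholds are invented.

def analyze(sales_records):
    """Insight step: summarise the data, as a dashboard would."""
    return [r["title"] for r in sales_records if r["weekly_sales"] < 10]

def act(slow_titles, reprice):
    """Action step: an autonomous system takes the next step itself
    instead of handing the analysis back to a human."""
    for title in slow_titles:
        reprice(title, discount=0.20)

records = [
    {"title": "Book A", "weekly_sales": 4},
    {"title": "Book B", "weekly_sales": 50},
]

# A system that only offers machine-learning insights stops here:
print("Needs attention:", analyze(records))

# A fully autonomous system would also carry out the action:
act(analyze(records),
    reprice=lambda title, discount: print(f"Discounting {title} by {discount:.0%}"))
```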

So, how can you tell whether the system you're considering is truly autonomous and thus worthy of the investment? Ask:

· Does it require a human to manage the system?

· Is it something that requires months of on-boarding?

· Does the system actually do the work for you or does it just provide suggestions for what you then have to do yourself?

Before you buy a system, make sure that it will actually change your workflow for the better, not add another difficult layer of work for you and your colleagues to manage. The benefit of using AI is always to improve the speed at which work can be done, exceeding what a human can do. If your system is not providing that, it may be time to rethink it.

2019: Year of the Workflow

Aside from the flu, dieting fads and Blue Monday, for many in the publishing industry January can only mean one thing – it’s time to implement plans and budgets for the year ahead. But as the marketing, sales, editorial, acquisitions and rights teams all bid against each other for more lines in the budget, grappling for a greater slice of the pie, how much is left in the pot for innovation, investment in technology and long-term strategic and visionary thinking?

The answer, more often than not, is very little indeed. Decision makers in publishing have traditionally been reluctant to prioritise investment in new technologies, replace legacy systems, or adapt workflows, sticking with the status quo rather than rocking the boat and causing inevitable short-term disruption and anxiety among employees.

Complete system overhauls are extremely rare in publishing, particularly in the larger houses, where the scale of cost and disruption is far greater. This means companies are often locked into deals with suppliers for decades, leaving them lumbered with archaic solutions that haven't necessarily adapted with the times to suit their needs. While it's far from ideal that many in the industry still use 20th-century technology on a day-to-day basis, it is unrealistic to expect publishers to take big, drastic steps to change things, especially in times of political and economic uncertainty.

But this doesn’t mean that publishers are turning a blind eye to technology and innovation. Last year we spoke to hundreds of business leaders across all sectors of the publishing world, many of whom were increasingly open to adapting their internal workflows in an effort to boost efficiency and stem loss of revenue.

Why workflows, you may ask? Well, one of the main issues is that, while most publishers now produce books and journals across all formats, the workflows embedded throughout publishing companies are still primarily print-first models. This means the processes for bringing ebooks, online journals, and audiobooks to market are often the same as those for print products, which traditionally require much longer lead times. A case study by Gutenberg Technology, published in March last year, laid out the benefits of switching to synchronous print/digital or digital-first workflows, claiming that 47 per cent of time and as much as 30 per cent of costs could be saved if publishers adopted this modern way of working.

These are compelling statistics, and most CEOs are not taking them lightly. In an industry where there is a constant struggle to keep costs down, profit margins are wafer-thin, and market forces are working against us, publishers can no longer afford unnecessary wastage in their supply chains and internal workflows. Industry leaders are now turning their attention, as the strategy du jour, to streamlining workflows and examining how many tasks across the publishing business can be automated thanks to innovative new technologies.

So, while I don't expect 2019 to be the year publishers revolutionise the way they use technology and do business, I do believe it will be one in which we take baby steps towards a smarter and more agile way of working. And technology will play a vital role in shaping the workflows publishers increasingly choose to adopt in the not-so-distant future.

A More Efficient System: A Look Ahead at 2019

Last year on the blog, we highlighted several ways in which technology is influencing and changing content industries. From newspapers to book and journal publishing, music to fine art, technology is speeding up processes, streamlining workflows, helping with the discovery, creation, and fact-checking of content, and improving the way we reach customers. We also discussed how, in many ways, these changes will affect those working within these industries.

As we look ahead to 2019, I want to highlight some of the key changes we can anticipate this year, to help us prepare for the future of publishing and better align our industry with what is going on in the world around us.

Artificial Intelligence

Even though we see artificial intelligence in our day-to-day lives, there continues to be some knee-jerk wariness on the part of publishers. And because artificial intelligence is uncharted territory, publishers aren't alone in that wariness. In a Pew Research survey of 979 technology pioneers, innovators, developers, business and policy leaders, researchers, and activists, conducted in the summer of 2018, the experts predicted that "networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities."

Though AI on a larger, global scale should be approached slowly and deliberately, to ensure proper implementation and protection for the humans involved in each industry, on a smaller scale in publishing AI can improve workflows and systems, freeing humans from mundane, time-consuming tasks so they can engage in more high-level, creative pursuits.

Workflow Solutions and Automation

AI and machine learning are used to help automate some repetitive tasks in publishing. Last year, we ran a series on the State of Automation, which was highlighted by The Bookseller, outlining how automation is being implemented in the world around us—from retail sites to healthcare—and how it will impact the publishing industry on a granular level. Though automation is still very much in its infancy in publishing, it has the potential to be one of the more disruptive changes in the foreseeable future.

Automating many of these systems could leave some departments, such as production, editorial, and rights, with radically different workloads and responsibilities, and could free them up to expand their roles into new, creative areas.

Blockchain in Publishing

Blockchain became the hot topic last year. A blockchain is a decentralized, digital ledger of information blocks shared across a peer-to-peer network, and it is the technology behind the cryptocurrency Bitcoin. For academic publishers, blockchain looks like the most viable way to chart research, peer review, and the dissemination of information.
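
For readers unfamiliar with the underlying structure, here is a minimal sketch of a hash-linked chain, using invented publishing events as the recorded data. Real blockchains add peer-to-peer distribution and a consensus mechanism on top of this basic idea; the sketch only shows why a shared record of this kind is tamper-evident.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create one block: a timestamped record chained to its predecessor
    by folding the previous block's hash into its own."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A toy chain recording stages of a manuscript's life.
chain = [make_block({"event": "manuscript submitted"}, prev_hash="0" * 64)]
chain.append(make_block({"event": "peer review completed"}, chain[-1]["hash"]))
chain.append(make_block({"event": "article published"}, chain[-1]["hash"]))

# Altering any earlier block would change its hash and break every later
# link, which is what makes a shared copy of the record tamper-evident.
for prev, curr in zip(chain, chain[1:]):
    assert curr["prev_hash"] == prev["hash"]
print("chain intact")
```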

Just last week, Dutch publishing consultant Sebastian Posth released a paper entitled "What Is Blockchain: Why and How Should the Industry Care?", describing the shift to blockchain and cryptocurrencies as being as "significant as the shift that happened with the emergence of the internet." Posth illustrates how blockchain can help publishers and other media with piracy, payments, and reaching customers better, but notes that blockchain will also "confront publishers with new, inherent obstacles and questions: about identity and governance; about laws and regulations; about transactions and revenue models; about crypto-currencies and currency-conversions; about crypto-economics and financial incentives; about censorship and borders – and a lot of things they might have never thought of before."

AI in publishing production - Simply a question of “when”

Book publishing has always been an industry of tight margins. Particularly in the world of the printed book, publishers have always found themselves at the mercy of overheads well beyond their control. From paper and ink costs to fluctuating global currencies and transportation outlays, publishers' profits have traditionally ebbed and flowed with external factors, and that's before we even mention the challenges of the retail environment and evolving reading habits.

This was a major concern among many of the C-level executives I met at the Frankfurt Book Fair in October, and more recently at other conferences in the US and UK. On the one hand, they were pleased, even defiant, that the printed book had apparently held strong amid challenging trading conditions and fought off a range of disruptive forces in the marketplace; on the other, they showed growing concern about the rising costs associated with physical products, and were evidently feeling the pinch.

Lean and mean

Books are an expensive business. However, most of the people I spoke to at Frankfurt were insistent that passing these increasing costs on to the consumer was not an option they were willing to explore. Yet they were extremely keen to hear about ways in which technology trends such as AI could help them streamline operations across the business and reduce operational expenditure.

More than ever, directors of publishing houses are looking for ways to make their organisations leaner and meaner, ensuring they're not overspending or overstaffing, in an effort to offset some of these spiralling overheads. And many are becoming increasingly aware that the new wave of technologies becoming available will help them do exactly that.

How soon is now?

Several months ago on this blog, we explored how automation is likely to impact various roles within publishing. We concluded that technologies such as AI would likely have their most significant short-term impact in the production and editorial departments, and that we can expect publishers to start rolling out AI-based technologies in their workflows within the next two years or so.

Some in the industry were quick to dismiss this prediction, stating that they just couldn't see publishers implementing AI-driven technologies, in any part of the business, in the immediate future. However, judging from those exchanges in Frankfurt, I am now more convinced than ever that leaders are already prepared to take a long, hard look at how technology can help them optimise their business processes.

First past the post

Earlier in the year we suggested that by introducing machine learning into the publishing workflow, particularly across the pre-production and editorial departments, publishers can free up around 40 per cent of the time spent on manual tasks. To my mind this is a conservative figure, especially when you consider how much production resource goes into formatting, layout, typesetting, and proofing: all highly automatable tasks that machine learning-driven technology can take on.

The technology is there, and the business case has been identified. Now that publishing leaders are starting to really take a keen interest in how this technology works and how it can be applied across their organisations to boost efficiencies, make savings and drive revenue, it’s not a question of “if” but “when” and “how far” they want to go with it. Either way, the production department will certainly be the first to witness AI in action, and the benefits of this transition will be immediately felt all around the business.

Last Chance to Participate in Our Workflow Survey

We have partnered with the Book Industry Study Group (BISG) on a survey of publishing professionals, asking where they struggle for time in their daily work lives: What work takes up the most time? What could you focus on if you had unlimited hours? How do you see the future of your role in publishing?

In talking to publishing professionals about their jobs, we hope to better understand where the industry is going and how we can provide solutions for challenges we face in our daily work lives.

We are closing the survey at the end of the year and we would love to hear from you. Please go here to tell us what your challenges are and how we can help you.

Discovery, Efficiency, and Better Research Tools: How PageMajik Can Work for Libraries

Open access and the recession have changed the landscape of library budgets and usage over the last 10 years. Library book and journal budgets have decreased; huge volumes of open access content exist, but with no quality control or easy way to discover research; and new university presses are trying to publish monographs, conference proceedings, and other content with the same staff and on shoestring budgets.

Last month, The Charleston Conference brought together librarians, publishers, electronic resource managers, consultants, and vendors to discuss these and other issues, to chart a way forward, and to let companies working in this space share services that might be helpful to libraries as their roles continue to change.

The Charleston Premiers is the portion of the conference in which publishers and vendors showcase their newest and most forward-thinking products, which may not yet be well known to the audience as a whole. The audience then votes to select its favourites in a variety of categories. We were pleased to have PageMajik selected as "Most Innovative Product" by the audience.

“For several years now the Charleston Premiers, which previews new and noteworthy products and innovations on the marketplace, has been gaining popularity at the Charleston Conference, particularly due to its fun, quick-fire pitching format and audience interaction,” said Anthony Watkinson, Director of the Charleston Conference. “This year delegates to the conference were particularly impressed by PageMajik’s pioneering approach towards improving publishing workflows and its innovative application of new tech such as AI, and I’d like to congratulate the company on winning our Most Innovative Product.”

PageMajik was developed out of our 40 years of experience working with publishers and libraries, and out of an understanding of the challenges that come with reduced budgets, small staffs, and vast amounts of open access information to sift through.

What we discovered at the Charleston Conference was that there are many ways PageMajik can be useful to libraries. Most notably, as libraries enter the publishing side of the industry, using machine learning to tackle the repetitive, time-consuming, and expensive aspects of the publishing process allows libraries and new university presses to free up 40% of the time spent on manual editorial and production tasks and focus on higher-level work. Another, more traditional use of PageMajik is the automatic metadata tagging and analysis the system provides, which offers vastly improved discovery in the sea of content, cutting research time in half and making research results more fruitful.

The team at PageMajik prides itself on its innovative approach to radical improvement, increased speed and cost reduction within the editorial workflow. As we work with libraries more, we are eager to find other ways we can help improve their processes. For more information or to tell us your particular challenges, please go to www.pagemajik.com.

No Winter of Discontent in Newsrooms

As the days grow shorter and the nights grow longer, it's beginning to feel a lot like winter. But will this cold season mean cold feet when it comes to AI investment and rollouts, as some are predicting? In other words, will this be another "AI winter"?

There is no denying that there is, still, a lot of hype around AI. And with this hype comes inevitable disillusionment when some of the bold statements, commitments and trials don’t pan out as expected.

Many industries and companies experience "AI fails" when projects aren't properly planned, are rushed through, are done for the wrong reasons, are not scalable, or are not supported by the right infrastructure. Recently, for example, the automotive industry was dealt a blow when deep-learning-powered self-driving car experiments didn't go to plan, setting progress back years.

Peaks and troughs

These peaks and troughs of enthusiasm and disappointment are characteristic of pretty much every major technological disruption in history, and part and parcel of the hype cycle, a concept famously created by IT analysts Gartner, whose simple graphical illustration helps to explain the phenomenon.

Some industries, and some companies within them, are further along the AI hype cycle than others. Arguably book publishing is at the very beginning of the process, and has yet to experience a "peak of inflated expectations", let alone a "trough of disillusionment" or an "AI winter", for that matter.

Early adopting cousins

Interestingly, one of the most advanced and progressive industries for innovative AI applications is newspaper and magazine publishing. Our cousins have been experimenting with and rolling out machine learning initiatives since 2013, when the Associated Press became an early adopter, automating formulaic business and sports reporting.

Two years later, the New York Times implemented an AI project called Editor to help journalists reduce labour-intensive tasks such as research and fact-checking. In 2016, the Washington Post trialled "robot journalism" at the Rio Olympics using its Heliograf software, which analysed data and produced news stories. Last year, Reuters launched its News Tracer product, which uses machine learning to sift social media for legitimate breaking news. Finally, just a few days ago, Quartz announced the launch of the Quartz AI Studio, a new tool to help journalists around the world use machine learning to report their stories.

Forced hands

There are good reasons why newsrooms in particular have been so quick to innovate and experiment with AI, arguably reaching the "plateau of productivity" on Gartner's hype cycle long before others. The tumultuous, cash-strapped sector has faced severe disruption in the form of migration to digital, changing consumer purchasing and reading habits, and a complete shake-up of business and revenue models that had existed for years (not too dissimilar from the evolution of book publishing, but at breakneck speed). Pew Research reported that in the space of just 10 years, newsroom employment at US newspapers dropped by nearly a quarter. There has never been more pressure on editorial teams to work more efficiently and deliver more with fewer resources.

In the face of such extreme circumstances and weakening financial conditions for media publishers, AI is clearly seen as a knight in shining armour, helping newsrooms to work harder, faster and smarter. And it just so happens that journalism, not traditionally seen as a hotbed of innovation, is the perfect testing ground for AI projects.

Lessons to learn

So, what can the book publishing industry learn from its cousins and their early adoption of AI technologies, given that we potentially have the benefit of a slower curve of disruption? If we look at where AI is being introduced in newsrooms, we can see that most implementations are launched to boost efficiency: not to replace journalists on any meaningful scale, but to assist them, taking care of the more mundane and repetitive aspects of their roles so they can focus on bigger and better things.

As Uber, Tesla and others in the automotive industry are learning, ambitious AI and machine learning projects can be incredibly risky, long, and frustrating processes. Yet, as many newsrooms can now attest, workflow-based AI projects that are innovative yet scalable, useful, and well-grounded can be incredibly effective and make all the difference. It's realistic to expect the book industry to start seeing AI applications roll out over the next few years, and judging from the experiences of our cousins, these rollouts will be most successful when embedded in our workflows.

Is it time to open up Peer Review?

Peer review is arguably the keystone of academic publishing, with reviewers serving as gatekeepers of legitimacy, tasked with ensuring that standards are maintained and trust in the field is sustained. It is also, for the most part, a thankless job. This might be about to change.

The practice of peer review probably began when Henry Oldenburg, Secretary of the Royal Society in the mid-17th century, sent out manuscripts to experts for vetting before publication. Since then, peer review has become more institutionalized, but the form itself has been remarkably stable: an editor sends a manuscript to a handful of experts in the relevant sub-domain, and if the experts green-light it as a valuable academic contribution, it is published.

Anonymity is crucial to how this system works. The peer reviewer is unnamed so that they can offer honest evaluations of a submission's quality without fear of retaliation, particularly if the author is someone with clout. The identities of authors are also kept anonymous, so that reviewer judgments aren't influenced by personal relationships, whether warm or cold. Of course, academics present their work at conferences, and some sub-fields are so small that everyone knows what everyone else is working on, but for the most part the secrecy is taken seriously and respected for what it makes possible.

Of late, a growing chorus of scholars has been questioning whether this accepted wisdom about the importance of anonymity in peer review is as sound as it might initially appear. For example, Caroline Schaffalitzky de Muckadell and Esben Nedenskov Petersen argue that papers accepted for publication should be published along with their peer reviews, and that the published reviews "should include not only reviews from the journal accepting the paper, but also previous reviews which resulted in rejections from other journals".

It should be conceded that this is a fascinating proposal, for a number of reasons. Earlier peer reviews and corrections are vital parts of how academia works, and simply sweeping them under the rug leaves this whole aspect opaque and obscure. Plus, peer reviewing is notoriously arbitrary, as is captured well by the meme of the capriciously cruel "#Reviewer2". John Turri of the University of Waterloo argues that maintaining anonymity keeps academic disciplines from developing open norms about what is worth publishing, frequently leading to frustration when a submission is rejected on seemingly arbitrary grounds. If we want honest discussions about the state of a field, we need to be able to see what standards are actually being used to determine what gets published in its journals.

As compelling as these arguments are, the worry is that the persistence of reviewer anonymity undercuts the very benefits they promise. For example, one scenario de Muckadell and Petersen are keen to avoid is unqualified or abusive reviewing: if people know their reviews are going to be made public, they will take care not to review abusively. But if reviewers know they will remain anonymous no matter how abusive their reviews are, it isn't clear why they would be motivated to change their behaviour. So although the authors might still succeed in their aim to "put forward [reviewers'] arguments for public scrutiny", this might not be enough to raise the quality of reviews or decrease the incidence of abuse.

Considerations like these have led to a more radical proposal: removing peer review anonymity altogether.

At first encounter, this might seem like a dangerous suggestion. After all, how can early career academics, especially those in vulnerable employment, critique honestly, and even harshly when it is called for? But if we look past this first reaction, two strong arguments can be found for the position.

For one, there would finally be motivation for people to write more responsibly, since they could claim credit for well-written and well-argued reviews but obviously not for abusive ones. It is true that a reviewer with no interest in using their peer review history as an academic credential might continue being abusive, but this should still improve things significantly.

Second, it should be pointed out that peer review is still academic work, even essential academic work. As Justin Weinberg from the University of South Carolina puts it, "my sense is that the credit one gets for peer reviewing is disproportionately small compared to how important peer reviewing is for the academic enterprise". Giving people the ability to take credit for work well done, then, is a matter of fairness.

An interesting model for how this might be done, without stripping reviewers of anonymity against their will, is the website Publons. It collects information from peer reviewers on a voluntary basis and verifies it with the journal publisher, allowing the creation of reviewer profiles that each reviewer can claim credit for and add to their CV.

As with any solution, there are skeptics. David Roy Smith from Western admitted that, as an early career academic, he simply hadn't had the opportunity to review many papers, especially for prestigious journals, and so he wasn't all that eager to sign up. In addition, there's the perpetually relevant question of whether the endless march towards quantifying and comparing work and its impact is actually good for academia.

Still, the removal of anonymity in peer review, voluntary for now, seems to be the direction we're travelling in, so we need to take it seriously. Universities and funding organizations will need to incorporate newly public data about peer reviews into their decision making, and decide whether people without publicly recorded reviews will be penalized. Publishers and publishing tech will need to make it easy to transfer and approve finished peer reviews to sites like these, so workflows don't get cluttered.

The opening up of peer review is bound to be a momentous transformation of a procedure that hasn't changed much in centuries, so who knows where we'll end up.

A Survey on Workflow and Automation

Over the year since PageMajik launched, we have spoken to hundreds of publishers about their workflow challenges, learning how much time they spend on repetitive tasks, how this affects time management, and what the main barriers are that prevent them from launching a product into the marketplace in a timely fashion. What we have found in these conversations is that, more often than not, old legacy systems are in place that greatly hinder publishers' efficiency and potential revenue. And when new modules are created, they are based on old technology and don't take advantage of the innovations being put to use in other industries.

With that in mind, we are delighted to announce that we have partnered with the Book Industry Study Group (BISG) to expand these conversations to a larger scale and gain an even better understanding of the challenges publishers face and how these business-critical workflow issues can be resolved. BISG and PageMajik have put together this survey for publishers of all sizes to identify trouble areas in the workflow, highlight where technology might be vital, gauge attitudes towards automation, and reveal how publishers feel automation might benefit them in their roles.

To participate, please click here and share your experiences.