The Future of Research: What is the Answer?

In scholarly publishing today, there is an ongoing debate about the efficiency and accuracy of workflows and the security of current publishing models. Digital publishing has improved speed to publication, and open access has simplified and democratized the sharing of research, but these technological advances have also brought the threat of piracy, the ease of plagiarism, and the ability for researchers to publish directly, creating a flood of information that readers must wade through to find something useful.

Eefke Smit, Director of Standards and Technology for the International Association of STM Publishers, put it this way last fall: “The STM publishing world is suffering its own set of trust issues at present. But even with its imperfections, the current system of academic publishing is strong and offers an efficient infrastructure.”

Others disagree.

Piracy and Plagiarism

In this digital world, it is easy for readers to download content for free and pass off research or ideas as their own.

The last year has seen many in the scholarly community discussing how blockchain technology (a decentralized, digitized series of information blocks shared across a peer-to-peer network) could not only eliminate plagiarism altogether but also help researchers collaborate on their work more effectively.

Each block in a blockchain carries transaction data, a timestamp, and the creator’s information, together with a reference to the previous block (typically its hash), forming a unique and effectively unalterable chain. Because every block can be attributed directly to its author or creator, collaboration becomes far simpler, which speeds up the research process immensely.
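
To make the structure concrete, here is a minimal sketch in Python of a hash-linked chain of blocks. It is illustrative only: the field names are hypothetical, and a real blockchain adds digital signatures, consensus, and peer-to-peer replication on top of this.

    import hashlib
    import json
    import time

    def make_block(data, creator, previous_hash):
        """Bundle content, creator attribution, and a timestamp into a block
        whose identity depends on the previous block's hash."""
        block = {
            "data": data,                  # e.g. a dataset, result, or manuscript section
            "creator": creator,            # direct attribution to the author/creator
            "timestamp": time.time(),
            "previous_hash": previous_hash,
        }
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    genesis = make_block("initial findings", "Researcher A", previous_hash="0" * 64)
    follow_up = make_block("replication data", "Researcher B", previous_hash=genesis["hash"])
    # Altering the first block would change its hash and break its link to the
    # second, which is what makes the chain tamper-evident.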

Last fall, Joris Van Rossum, Special Projects Director at Digital Science, published a report entitled “Blockchain for Research: Perspectives on a New Paradigm for Scholarly Communication” which outlines a number of ways in which scholarly publishing can benefit from the use of blockchain, both from a security and ease of rights management perspective.


As mentioned above, blockchain can also be used for rights management. Content blocks can be embedded with rights information and a smart contract that allows easy sharing, licensing, and usage. For example, if a writer wants to use an image to illustrate a journal article, they can see in a matter of moments who holds the rights, what a licence costs, and whom to contact to secure permission.
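
As a rough illustration of that lookup (in Python, with entirely hypothetical field names and values), rights information embedded alongside a content block can be queried to answer exactly those three questions: who holds the rights, what a licence costs, and whom to contact.

    # Hypothetical example of rights metadata travelling with a content block.
    image_block = {
        "asset_id": "fig-0042",
        "content_uri": "assets/fig-0042.tiff",
        "rights": {
            "holder": "Example Imaging Ltd.",
            "contact": "permissions@example.com",
            "licence_fee_usd": 150.00,
            "allowed_uses": ["journal-article", "conference-slides"],
        },
    }

    def licensing_terms(block, intended_use):
        """Return the rights holder, contact, and fee if the intended use is permitted."""
        rights = block["rights"]
        if intended_use not in rights["allowed_uses"]:
            return None
        return rights["holder"], rights["contact"], rights["licence_fee_usd"]

    print(licensing_terms(image_block, "journal-article"))
    # ('Example Imaging Ltd.', 'permissions@example.com', 150.0)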

For publishers, this will increase revenue: automating rights work frees staff for other high-level tasks, while also making it easier to keep track of monies due and to identify available rights that can be exploited.


One of the struggles researchers, academics, and publishers now face is the sheer sea of information that Open Access has produced. Making content discoverable and searchable has become one of the main challenges and concerns keeping publishers awake at night.

In recent years, many of the innovations coming through in the industry have been geared towards troubleshooting in this arena. We’ve seen identifier initiatives like ORCID and Crossref come to the fore and become increasingly adopted by publishers.

Many are predicting that, now that publishers have mastered metadata and SEO and are increasingly incorporating article-level innovations, the next major step will be the adoption of AI technology. Beyond the hype, and from a practical perspective, AI is widely predicted to advance publishers’ endless quest for improved discoverability and to drive efficiency in the editorial workflow.

Through our product suite, PageMajik, we build tools that improve the free flow of information into the marketplace by easing the workflow constraints and time-consuming tasks in the publishing value chain, from author to publisher to reader. By improving these systems and allowing writers and publishers to write and publish their work easily, we hope to play a major role in informing the future of research.

An Antidote to the Curse of Knowledge

How workflows can help manage cognitive biases that complicate and delay work

When celebrated cognitive psychologist Steven Pinker was recently asked what he considered to be the greatest impediment to clear communication, he named the “curse of knowledge” cognitive bias. This is the phenomenon where a person who knows something finds it extremely difficult to imagine what it is like to not know it. This can lead to the knowledgeable person using jargon, providing inadequate explanations, and skipping steps in descriptions. For anyone who works with others, these problems are familiar, incredibly frustrating, and until now, seemingly inescapable.

A particularly dramatic case of this is Leonard Jacobs’ tale of his week from hell freelancing as blog manager at an unnamed financial publishing company. The week began with an incredibly vague job description that didn’t go beyond the requirement that he “manage the blog”, moved on to the relevant people not being told of his arrival and to different supervisors giving inconsistent feedback, and concluded with his dismissal. This was a debacle by any measure. But the question remains: how did this happen? And why do incidents like this continue to happen so frequently in the workplace?

Two explanations are commonly proposed: people are evil, or they are plain incompetent. While tales of sadistic bosses are certainly common enough, the publishing company employees who feature in Jacobs’ tale, “Maria” and “Buehler,” hardly seem wicked. And while they do come across as somewhat incompetent, they may well have been excellent at their own jobs. I suggest the better explanation is Pinker’s “curse of knowledge”: communication breaks down in communities not because of individual incompetence but because of people’s inability to imagine how it feels for another person not to know something. After all, the person who wrote the report made up of “just three words” probably thought it made sense, but only because they could not see in the moment how much background information someone else would actually need in order to understand it.

Pinker’s own advice for managing this bias is to choose words more carefully and to test out messaging. The problems with solutions like these are twofold. The first is that testing messaging only works if you know who the intended audience is; in large organizations this is simply not going to be possible, since the audience might not be decided until later. The second is that the root of this bias is that people are for the most part unaware that they are being unclear, so even if they tried choosing words more carefully, they could still remain totally opaque.

While no silver bullet for this problem exists, a technological solution that organizations increasingly rely on is the workflow. As Wikipedia defines it:

A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information.

To put it simply, a workflow is a way of formalizing the instructions and rules that govern how a workplace functions, allocating roles, rights, and responsibilities to the various people involved in a project. This doesn’t make people communicate better; it creates a situation where they don’t have to. Instructions no longer have to be interpreted from a few cryptic words, because they are embedded within the system itself. People also don’t have to spend time and attention trying to remember what the latest set of instructions is; they can simply submit their work and let the pre-set rules take over. And automatically assigned templates make clear that there are expectations to be met, so that certain kinds of reports, Jacobs’ three-word ones, for example, will simply not do.
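
As a minimal sketch of how such a workflow might be encoded, the Python below assigns each stage a responsible role, a set of required report fields, and a next step; the stage names and fields are invented for illustration rather than drawn from any particular system.

    # Each stage names who may act on it, what must be filled in, and what follows.
    WORKFLOW = {
        "draft":   {"role": "author",  "next": "review",  "template": ["summary", "audience", "word_count"]},
        "review":  {"role": "editor",  "next": "publish", "template": ["verdict", "requested_changes"]},
        "publish": {"role": "manager", "next": None,      "template": ["channel", "publish_date"]},
    }

    def submit(stage, role, report):
        """Accept a submission only if the right role files it and the stage's
        template is fully filled in; a three-word report simply won't do."""
        spec = WORKFLOW[stage]
        if role != spec["role"]:
            raise PermissionError(f"{role} cannot act on the '{stage}' stage")
        missing = [field for field in spec["template"] if not report.get(field)]
        if missing:
            raise ValueError("report is missing: " + ", ".join(missing))
        return spec["next"]  # the system, not a cryptic verbal instruction, decides what comes next

    # Example: a complete report moves the work on to the next stage automatically.
    submit("draft", "author", {"summary": "Q3 blog plan", "audience": "researchers", "word_count": 800})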

And the best part is that with sophisticated workflow tools, the sheer range of options available ensures that the chosen workflow doesn’t have to be any more constricting than necessary. Human behavior is never going to be as rational or as clear as we would like, but that is no reason not to seek ways to optimize and streamline things as much as possible.

Removing the Pain Points in Journal Publishing

In December, David Crotty, Editorial Director, Journals Policy, at Oxford University Press, published a piece in The Scholarly Kitchen lamenting the shutdown of Aperta, the workflow solution created by the Public Library of Science (PLOS), and giving voice to the disappointment of a research community that had “high hopes for much-needed improvements in the manuscript submission process.”

More than a decade ago, when journals and their submission processes became digitized, researchers rejoiced at the speed and ease with which their work could be published and at how that would change the future of scholarly publishing. What they had not anticipated was how unnecessarily complicated the submission process could become.

As Crotty notes, PLOS ran into trouble when working with different editorial teams. Each publisher has its own format and style, and submissions from researchers arrive in a variety of formats, with new media, from charts to photos to videos, being added all the time. Publishers have their own individual workflow systems, and scientists and researchers, who want to publish their findings to further discovery, don’t have time to figure out each individual, often labor-intensive, process. And even once authors do figure out a submission process, as Phill Jones, Director of Publishing Innovation at Digital Science, notes in an article in The Scholarly Kitchen, “People complain about slow upload speeds and poorly designed workflows that mean they have to babysit a submission for several hours.” Every effort to create a uniform, efficient submission process across all publishers has been unsuccessful.

As Jones suggests, “My advice would be for publishers to try out their submission systems themselves (under realistic conditions, with large files and multiple authors) and see how much of a pain they are to use. If you do this, you’ll probably see some easy wins.”

With Open Access and the increased use of social media, the future might see researchers electing to publish and promote directly to the research community, bypassing journal publishers altogether.  What journal publishers are realizing is that their future could be unstable if they don’t implement a change in their publishing process.

If publishers cannot agree on one uniform style guide, then what they need is a system that easily adapts to each individual publisher’s needs, while making the submission process as simple as possible for writers.  

For the last two years, the team at PageMajik has been working with large and small publishers on developing a workflow solution that deals with these very issues. 

The team has created a cloud-based system that allows each publisher to pre-set its specific requirements so that any submission is adapted automatically to the required format. The system also highlights any missing elements, so writers can add them and complete the submission process quickly and easily. This bespoke solution allows submissions of all types to be transformed into an easily publishable format, which will help reduce publishing gridlock on both the writer’s and the publisher’s side and help researchers get their work out into the world more quickly.
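
As a simplified illustration of the idea (the preset and field names below are hypothetical, not PageMajik’s actual configuration), the core of such a check is comparing a submission against the publisher’s declared requirements and reporting whatever is missing:

    # Each publisher declares up front what a complete submission must contain.
    PUBLISHER_PRESETS = {
        "journal-a": {
            "required": ["title", "abstract", "keywords", "orcid", "references"],
            "reference_style": "APA",
            "figure_format": "TIFF",
        },
    }

    def missing_elements(publisher, submission):
        """Return the elements the author still has to supply before production."""
        preset = PUBLISHER_PRESETS[publisher]
        return [field for field in preset["required"] if field not in submission]

    manuscript = {"title": "On X", "abstract": "...", "references": ["..."]}
    print(missing_elements("journal-a", manuscript))  # ['keywords', 'orcid']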

As digital publishing becomes more a part of our lives, eliminating the pain points for both researchers and publishers alike will help traditional journal publishers retain their position in the publishing landscape for the foreseeable future, improve research’s speed to market, and bolster the scholarly community’s ability to produce top-notch work.

Re-examining the Publishing Value Chain

For the last decade, the traditional publishing industry has been contracting. The rise of digital publishing, self-publishing, and open access, coupled with the worldwide recession, forced publishers large and small to conduct massive layoffs. In order to maintain profit margins, publishers have had to publish an increasing number of books with an ever-dwindling workforce.

The Challenge to Traditional Publishing Models

This increasing workload often leads to lapses of attention, from typo-filled publications to a failure to understand the impact a publication will have on the marketplace, as with last year’s recall by Usborne of Alex Frith’s Growing Up For Boys, which led to a controversy around objectifying women.

As these issues occur more frequently and rapid direct-to-consumer publishing models like Kindle Direct Publishing or Lulu become more popular, traditional publishers see their role threatened. Publishers must re-examine the value chain and focus energy and budget on the most important roles they play: the curation, editing, and promotion of fine, informative, and entertaining books and journals.

Embracing the Future

In order to best do that, publishers must commit to improving a somewhat time-consuming and outdated publishing process and to embracing technology wherever it can help make the system more efficient, freeing people to focus on higher-level work.

Publishers have often been reluctant to embrace technology because of the cost, the time needed to train staff on a new system, and a lack of proven effectiveness. Yet when publishers embraced the importance of metadata, they found that their books were catalogued better and discovered more easily. Now that they have that in hand, it is important for publishers to look to the next technological solution for their challenges.

Technology and digital publishing may have forced publishers to deal with a changing marketplace, but technology also offers traditional publishers a chance to update systems in ways that improve workflow and efficiency and ultimately generate increased revenue. Rights management systems, better identification of rights holdings, sales automation, and predictive technology for more profitable acquisitions, to name a few, have already helped publishers take better control of their bottom line.

Trusting the Machines

Specifically, the addition of machine learning to the publishing process is crucial. As Tim O’Reilly, founder of O’Reilly Media, noted at last fall’s W3C meeting, artificial intelligence and machine learning could help publishers with their essential problem of “matching up the people who know something or have a story to tell with the people who want to find them.” Machine learning can learn and improve upon publisher formats and systems, eliminate human error, take on time-consuming tasks, and help publishers better analyze and understand readers’ needs.

It is in the day-to-day publishing process that publishers most need a system that automates redundant tasks and brings all assets and project management into one integrated system, so that publishers can focus on the higher-level tasks of acquiring and publishing books and journals well. A system that helps every member of the publishing team, from author to editor to production, would allow publishers to be more efficient, respond more quickly to trends, and digitize and update their backlist more rapidly, allowing them to reclaim their roles in the industry.

How can New University Presses be more disruptive?

At the Researcher to Reader conference in London last week, New University Presses (NUPs) and Academic-led Publishers (ALPs) were very much the hot topics on the agenda. With as many as 19 NUPs becoming operational in the UK in recent years (including the likes of White Rose University Press, UCL, and Cardiff University Press), there is a perceptible shift taking place in academic publishing, one which aims to put academics and institutions at the centre, prioritising their needs above all else. Many believe that this trend will be the most disruptive development the industry has seen since Open Access, once again transforming the role of publishers. But how real is the threat they actually pose? And what role will technology play in this story?

Technology is very much at the heart of everything these new outfits do. They predominantly champion digital-first business models, with the production of print products across monographs, books, and journals, usually via Print on Demand, only as secondary propositions. They are Open Access advocates through and through, driven by a need to disseminate research on the largest possible scale to meet the demands of scholars. They are increasingly investing in affordable technology and service options, which can help them establish a strong infrastructure and better manage their workflows on a day-to-day basis. And they do all this at a relatively low operational cost – their goal is not to generate revenue and they tend not to have article or book processing charges.

The resource issue

While many technological innovations have dramatically reduced NUP set-up and running costs, a lack of human resources has always been, and still is, the main stumbling block to their growth, with most NUPs operating with just one full-time member of staff. Many NUPs are set up out of scholarly libraries, and running the Press becomes one more item on the long list of tasks the stretched, modern-day librarian must undertake. Even when an NUP is established as a separate entity with its own dedicated resources, it typically lacks the resources to compete with more established publishers, limiting how much research can realistically be processed, disseminated, and marketed effectively.

This resourcing issue means that, while some academics will undoubtedly choose to publish their research via their institution’s Press, an NUP in the early stages of its trajectory is unlikely to be able to publish the vast majority of the work of its home university’s academics. So, while NUPs may by nature be perceived as radically disruptive to the hegemony of traditional publishers, on the metrics of volume, scale, and resources they are unlikely to pose a real threat to that business, at least at present.

Machine learning in the workflow

One of the main challenges NUP staff face is the need to constantly juggle tasks. They spend far too long on editorial procedures such as indexing and manually inputting metadata to make sure research is discoverable, and end up spending very little time promoting and marketing the work so that researchers can actually find it. The entrenched systems most publishers use make these processes slow and arduous, not to mention susceptible to human error.

This is what makes developments in machine learning, and their introduction into the publishing process, so exciting, particularly for resource-starved NUPs. By introducing machine learning into the workflow, we estimate that publishers can free up around 40 per cent of the time spent on manual editorial tasks. With those processes automated, NUP staff can focus instead on adding real value where human attention is needed most: higher-level work such as promoting journals and books so that they reach more readers around the globe, and so that NUPs actually become the disruptive threat traditional publishing fears.
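
As a deliberately simple illustration of the kind of editorial chore that can be automated (a toy frequency-based stand-in, not PageMajik’s system or a production machine-learning model), consider auto-suggesting subject keywords so that staff only review and confirm metadata instead of keying it in from scratch:

    import re
    from collections import Counter

    STOPWORDS = {"the", "and", "of", "in", "to", "a", "is", "for", "on", "with", "that", "across"}

    def suggest_keywords(text, limit=5):
        """Propose candidate subject keywords for an editor to confirm or reject."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
        return [word for word, _ in counts.most_common(limit)]

    abstract = ("Open access publishing and metadata quality shape the "
                "discoverability of research outputs across repositories.")
    print(suggest_keywords(abstract))  # e.g. ['open', 'access', 'publishing', 'metadata', 'quality']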

A New Workplace Ethic

How Content Management Systems can help build a more respectful work environment

An enduring myth we live by is that of the lone genius: the solitary individual (usually male) who is preternaturally disposed to frequent “eureka!” moments, who possesses great personal strength, and who is the main engine of change and progress. These figures cast long shadows too, whether it is promising physicists having to aspire to be the next Einstein, entrepreneurs constantly being positioned as the next Bill Gates or Steve Jobs, or political leaders being pitched as the next Martin Luther King. There’s nothing inherently wrong with looking up to individuals, but what would a vision of success that stressed cooperation look like?

Enter the Content Management System, or CMS. At its most basic, a CMS is an application that allows content to be created and modified according to a pre-selected workflow in a multi-user environment. Consider the CMS offered by PageMajik:

Since this one is built for publishing, there are five relevant tabs: Manuscript (for the author), Review (for the editor), Art (for the illustrators), InDesign (for the designer), and Miscellaneous. Using a workflow chosen in advance, clear rules can be provided for how files are to be transferred and treated. For example, it can be decided that once the artwork and manuscript are uploaded, an InDesign file will automatically be created, a PDF generated, and a proofing exercise run by the author repeatedly until it is satisfactory. Moreover, since every file uploaded is stored in the cloud, previous versions can be consulted whenever needed, and if a newer version turns out to be unsatisfactory, you can simply continue working from an earlier one.
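
A minimal sketch of that rule might look like the following Python, where the production steps are stand-in stubs for the real InDesign and PDF generation, and every generated file keeps its version history so earlier versions remain recoverable:

    versions = {}  # file name -> list of successive versions

    def save(name, content):
        """Keep every version so an earlier one can be restored at any time."""
        versions.setdefault(name, []).append(content)
        return content

    def build_layout(manuscript, artwork):   # stand-in for automatic InDesign creation
        return f"layout({manuscript} + {artwork})"

    def export_pdf(layout):                  # stand-in for PDF generation
        return f"pdf({layout})"

    def on_upload(manuscript, artwork):
        """Once both the manuscript and the artwork are uploaded, the next steps
        fire automatically instead of waiting for someone to remember them."""
        layout = save("book.indd", build_layout(manuscript, artwork))
        return save("book.pdf", export_pdf(layout))

    on_upload("chapter1.docx", "figures.zip")
    print(versions["book.pdf"])  # the full history of generated proofs stays available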

One reasonable objection to such a set-up is that trying to formalize a workplace using a CMS might constrain those people who do not have a fixed way of working, those who really do rely on unpredictable “eureka!” moments. This however is not a problem for sophisticated content management systems like PageMajik’s. For example, if a user wants to be unconstrained throughout, then a workflow which grants total access and management capabilities can be chosen. Alternatively, if someone wants to submit their manuscript and then be done, this is straightforward too. This way, the idiosyncrasies of individuals can be taken into account while still holding fast to the notion that every aspect of publishing deserves recognition and respect.

While the benefits of Content Management Systems are substantial and largely uncontested, what goes unmentioned is that they also instantiate a moral principle: no longer is any particular individual seen as the central point around which everything revolves. Of course, it’s still the author’s book and it is still the author’s name that will appear on its cover. But the formal recognition that each book is a team effort, brought to fruition by many hands, is still a valuable change. At least in the day-to-day functioning, there will now be explicit acknowledgement that each person’s contribution is indispensable, heralding a new and more respectful workplace ethic.

The Machines are Coming

Publishing’s understandable but untenable reluctance to embrace AI

In her wildly popular TED talk, “wrongologist” Kathryn Schulz asks her audience how it feels to be wrong. Obvious responses include “dreadful”, “thumbs down”, and “embarrassing”, but Schulz points out that these are answers to a slightly different question: how it feels to realize you’re wrong. Of course, realizing you’re wrong can be devastating, but being wrong itself doesn’t feel like that. In fact, being wrong feels no different from being right.

Schulz’s insight here is that when we don’t know something, we typically don’t know that we don’t know it. We operate quite blissfully under false assumptions about how the world is, until we come up against reality and realize in a revelatory moment that we are, in fact, mistaken. This can be relatively harmless, as when you find out your keys really aren’t in your pocket. But not all cases are so benign. If an entire company and the livelihood of all its employees are on the line, then operating under a faulty set of assumptions about the world can be catastrophic.

Consider the use of artificial intelligence. Data shows that a majority of people in almost every conceivable sector expect to switch to greater AI usage in the near and medium term.

In publishing, too, there are modest signs of systematic adoption of AI. Companies are already employing it to target readers in customized ways a flesh-and-blood publisher could not even dream of and to enhance discoverability and targeting, and Amazon is even toying with creating books written entirely by AI. Despite this, many of the publishers I have talked to remain satisfied with, or at worst ambivalent about, their non-AI methods of production for the foreseeable future. In some cases, this is a completely understandable attitude brought on by prior experiences with tech companies that made claims their technology could not actually fulfill. But another significant reason seems to be a certain romanticization of the way things are, coupled with the all-too-human wishful thinking that any new problem can be addressed simply by adhering assiduously to the usual ways of working.

Unfortunately, to resist the inexorable approach of AI today is to be one of Schulz’s people who are wrong but don’t realize it, believing everything is fine even though things are already starting to shift. As she points out, we can go some way without realizing how mistaken we are, but at some point the ground under us gives way and the truth becomes undeniable. For now, there is still time to get in front of this trend and embrace AI on our own terms. But the window is closing.

A Weathervane for the Changing Winds in Publishing

How a new workflow management tool promises to help publishers flourish in the digital age

There’s an old publishing joke that goes like this: the first thing Johannes Gutenberg printed on his newly invented printing press was the Bible. The second was an article about the death of publishing.

While this rightly pokes fun at the perpetual think pieces bemoaning the death of publishing, it is important to concede that one reason these “death of publishing” pieces get written so often is simply that publishing faces new crises all the time. Like the Hydra, which sprouts two heads for each one cut off, the moment one challenge is dealt with, more spring up. This isn’t a call to throw up our hands and give up (publishers play far too important a role culturally for that to be an acceptable option), but it does mean that publishers need a partner with a constant finger on the pulse of the broader field to help them recognize and adapt to new challenges as rapidly as possible. Enter PageMajik.

We firmly believe that the future cannot be faced unless you are well acquainted with the past and the present. Applying this philosophy gives PageMajik an edge, since its team has worked in publishing for decades. This intimate knowledge of the field ensures that they don’t just know the overt trends being observed now, but also the subterranean patterns not yet widely recognized. Consider, for example, how in the 90s people in the publishing industry were convinced of the imminent death of print, its market share supposedly destined to be eaten up by slick new digital books. Many start-ups were launched in the hope of cornering this new market, with new devices churned out faster than anyone could keep track of. Unfortunately, e-book sales did not take off as predicted, and many of these start-ups had to shut down.

As figures from the International Digital Publishing Forum show, the focus on alternatives to print only started to pay off around 2009. Flash forward to 2017, and more than two-thirds of adult fiction sales are digital. That isn’t to say that e-books are enjoying uninterrupted growth; 2017 also saw a 17% drop in e-book sales thanks to “screen fatigue”. There is no easy narrative about the market here that can be learnt and mechanically adhered to. Rather, what episodes like these emphasize is that having a vision isn’t enough; you also need a trustworthy hand on your shoulder to hold you back when it’s prudent and to give you a nudge when necessary. This is precisely the role PageMajik plays.

Accordingly, after extensively surveying publishers about their most pressing needs, we decided to create a single system that allows publishers to monitor and direct the progress of a book from start to finish. It would offer a shared platform where authors, editors, and designers can all work together simultaneously, while its cloud-based content management system ensures thorough version control. Best of all, state-of-the-art automation eliminates many of the mechanical tasks that previously had to be done manually, speeding up the entire process significantly and yielding a windfall saving in time and money.

As impressive as these features may be, PageMajik will never be content with just these but will always look to improve upon them, so that its partners never have to fear being left behind. To go back to the weathervane image invoked in the title, these vanes have two aspects. The first is that they reveal the direction of prevailing winds to an onlooker, and the second is that they turn themselves to orient to those winds. By loose analogy, PageMajik keeps an eye on the future and constantly transforms itself with the times, ensuring that quality never has to suffer because of new contingencies. We hope you’ll join us on this journey.

Sensitivity Readers and the C-Word

How social media outrage is changing how authors write diverse characters

When Young Adult author Laura Moriarty heard President Trump denounce Muslims en masse, she was appalled and wanted to do something. She decided to write an inspirational dystopian novel where a white teen protagonist would help resist the government’s forced internment of Muslim-Americans as a simplistic, if somewhat heavy-handed, parable for our modern times. Little did she know that she would soon be accused of insulting marginalized communities and have her book itself denounced as a “white savior narrative”. The debate over what should be allowed to be said in the public sphere, and by whom, rages more fiercely than ever.

Moriarty’s protest was not the first time outrage had targeted authors who were perceived as portraying minority communities offensively. After a blogger declared that Laurie Forest’s initially well-received book The Black Witch was “the most dangerous, offensive book I have ever read”, a massive online campaign was launched to keep the book off the shelves. Keira Drake’s The Continent was branded “retrograde” and “racist trash”, causing the book’s publication to be delayed. Mary Robinette Kowal even decided to pull a project when she was told it was “problematic”.

As a response to the increasingly frequent outrage over authorial missteps regarding characters with marginalized identities, publishers have started hiring “sensitivity readers”: people belonging to the relevant marginalized groups who review manuscripts for insensitive language and cultural misrepresentation. Rates start at around $250 a book, and it is common for an author to hire anywhere from 12 to more than 20 sensitivity readers for a single novel.

Some authors have taken this change in stride and some even think this is a positive development. Fantasy writer Kate Milford, for example, sees sensitivity readers as playing an analogous function to a history expert who provides information about historic context. Just as a Victorian scholar might be called in to ensure that a book about Victorian times does not make any major factual mistakes regarding clothing or norms, a bipolar sensitivity reader might ensure that writers who aren’t themselves bipolar do not make any erroneous or offensive choices regarding manic-depressive characters. As Milford puts it, "it's not that I can't empathize or do the imaginative work myself, but I want accuracy."

Not everyone is as sanguine about this trend, however. Novelist Joyce Carol Oates scathingly tweeted about what might have become of now-cherished books had they been forced to kowtow to the demands sensitivity readers might have made of them.

The worry for critics like Oates is that even if sensitivity readers are appointed for noble reasons, they risk serving as Trojan horses that sneak in censorship (the dreaded C-word!). After all, it does not seem implausible that the narrow ideas about race, gender, sexuality, disability, and so on that are currently in vogue will nudge authors into producing safer, less risky, and consequently less valuable work. Isn’t literary progress and innovation often produced by violating seemingly sacrosanct moral rules? Where would the literary canon be without Allen Ginsberg’s Howl, Mark Twain’s Huckleberry Finn, or Toni Morrison’s Beloved, all books that were deemed immoral at the time of their release? Should works of merit have their wings clipped by the myopia of current moral standards?

The question then seems to be: how should one maneuver between the Scylla of furthering oppression and the Charybdis of censorship creep?

PageMajik offers a way out of this bind: an even-handed approach that recognizes that both sides are relying on powerful and compelling moral principles. By offering workflow management tools of unprecedented flexibility, it ensures that authors can construct writing processes tailored to their own creative needs. Specifically, all decisions about when sensitivity readers are consulted, and whether their recommendations are accepted, are left to the author. This way, someone like Kate Milford can easily create an arrangement where input is sought early and often, while others like Joyce Carol Oates can maintain the insularity they seek. By ensuring that all consultation happens at the writer’s discretion and, more importantly, that all decision-making power resides with the author, the threat of censorship is mitigated while minority voices continue to be heard and heeded.

This isn’t a perfect solution, of course – opponents of sensitivity readers will argue that the threat of censorship creep remains, while supporters will criticize how marginalized voices can easily be shut out. Still, given the set of incompatible moral demands laid on us, PageMajik’s ceasefire might very well be the best of our available options.