How to keep publishing tech from “Locking-in” academics

Scholarly publishing has recently been beset by fears that large publishing companies are creating end-to-end publishing platforms that would unfairly create dependence and entrench monopolies by providing academics with services that become the de facto standard. Smaller publishers, unable to afford the kinds of acquisitions the behemoths make, will simply be powerless to compete with the ease and efficiency of, say, their submission systems, for both academics and publishers. And as more of the market is captured by the large publishers, they gain ever more power over the terms of contracts, prices, and so on.

The presence of different-sized competitors in a marketplace always raises concerns about the sustainability of the smaller players. After all, the bigger ones have more resources to pour into R&D, producing more efficient tech and processes, which gives them more of a market advantage and helps them grow bigger still. In cycles that can be considered virtuous or vicious, depending on your standpoint, bigger organizations constantly threaten the tenability of smaller ones. The reason we can’t simply resign ourselves to this dynamic is that monopolization of the field by an ever-smaller number of competitors allows the survivors to unilaterally fix the terms of contracts that everyone else is forced to abide by.

The usual response to domination by large publishers is to advocate for open science, but Open Access can’t really help here because OA publishers need to be competitive as much as anyone else. If the most effective tools are found exclusively in journals by large for-profit publishers, then even researchers who might otherwise be sympathetic to OA initiatives might opt to publish elsewhere.

But unlike the standard case of “Big Deal” packaging by large publishers, there is one crucial difference in the case of workflow management solutions: competition is possible. Unlike access to journal articles, which publishers control, there are plenty of tech companies already dedicated to producing solutions for publishers. Instead of naively hoping that large publishers abstain from engaging in an arms race over tech, we can slightly less naively hope to maintain best-practice guidelines that tech companies are expected to abide by. These could include:

  1. Keep the entire system as modular as possible, so the system architecture can’t by itself straitjacket libraries into an all-or-nothing choice (a sketch of what this might look like follows this list).
  2. Keep proprietary file formats to a minimum, to allow hassle-free disengagement from the system if required.
  3. Offer different bands of pricing options to ensure small and medium-sized publishers can access at least a bare-bones system.
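
As a minimal sketch of what the first two guidelines could mean in practice (plain Python with invented component names; no vendor’s actual architecture is implied), each stage of the workflow sits behind a small interface so any vendor’s module can be swapped in, and the data can be exported in an open format at any point:

```python
import json
from typing import Protocol


class WorkflowStage(Protocol):
    """Any vendor's component can slot in, as long as it honours this interface."""
    def process(self, manuscript: dict) -> dict: ...


class BasicCopyEditor:
    """A stand-in stage; a publisher could swap in a competitor's module instead."""
    def process(self, manuscript: dict) -> dict:
        manuscript["status"] = "copyedited"
        return manuscript


class Pipeline:
    """Guideline 1: the system is a chain of replaceable parts, not a monolith."""
    def __init__(self, stages: list) -> None:
        self.stages = stages

    def run(self, manuscript: dict) -> dict:
        for stage in self.stages:
            manuscript = stage.process(manuscript)
        return manuscript

    def export(self, manuscript: dict) -> str:
        # Guideline 2: open format on the way out, so leaving the system is painless.
        return json.dumps(manuscript, indent=2)


pipeline = Pipeline(stages=[BasicCopyEditor()])
print(pipeline.export(pipeline.run({"title": "On Modularity", "status": "submitted"})))
```

Nothing in such an architecture punishes a publisher for replacing one stage, or for walking away entirely; that, in miniature, is what lock-in resistance looks like.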

Why would publishing tech companies agree to this? For one, and most idealistically, most people in publishing tech are themselves invested in the health of academia. But for the few who might need a little motivation, the scholarly community at large can make clear which tech it is willing to work with. Soft pressure and shaming, especially in the age of social media, might just be enough for such an ambitious endeavour.

Of course, we can’t be sure such a tactic would even be feasible or effective, particularly given that larger publishers can simply acquire tech companies. But given where we are today, ensuring that publishing tech is willing to help resist publishing monopolies might very well be our best shot at keeping the marketplace competitive. This isn’t going to make resource and size disparities disappear, but it just might ensure that everyone plays by the same rules.

Humans are Afraid of AI, but Why?

In this blog, we write a lot about the future of publishing with the introduction of machine learning and artificial intelligence to help automate repetitive tasks and make workflows more efficient. We also highlight that there is still resistance, in the industry and the world at large, to embracing technology, due to fears about machines taking human jobs. But what is at the heart of that fear? And should we give in to it by regulating how much machine learning we implement in our workflows?

In a recent article in Fast Company on the need for AI, writer Robert Safian shared a colleague’s mantra: “Everything in an organization that can be done by machines should be done by machines — efficiency dictates it. But everything that needs to be done by humans must be done by humans. The defining characteristics of an enterprise — those involving ethics, judgment, creativity, and compassion — require a human touch.” As an example of an instance in which a human touch was needed, the article highlighted the recent decision by Nike to feature controversial NFL star Colin Kaepernick in its “Just do it” campaign. At the center of a debate that has extended beyond the National Football League and its fans into the very center of US power, the White House, Kaepernick on paper does not seem like a logical choice for a spokesperson. A machine would never have selected him from a list of choices. But as a representative of what Nike stands for, “to bring inspiration and innovation to every athlete,” Kaepernick, who stood up for something he believed in and sacrificed his career for it, fit perfectly. Only a human could see the potential, and only a human could have made that decision.

A Pew Research study conducted last autumn showed that 72% of respondents are worried about a future in which machines are able to do many jobs currently held by humans. The study went on to outline how humans want to place restrictions on how, when, and how much machines are involved in an organization: “in the event that robots and computers become capable of doing many human jobs, for example, 85% of Americans are in favor of limiting machines to performing primarily those jobs that are dangerous or unhealthy for humans.” In addition, respondents were in favor of putting restrictions on how many jobs a company could replace with machines, still giving jobs to humans even if machines are capable of doing them faster, or providing guaranteed pay for humans even if a machine was doing the work.

It’s clear from these results that humans are concerned about machines coming into the workplace because they will take their jobs, but we are ignoring the second part of that concern: humans are afraid of adapting. Whether that means adapting away from a system they are comfortable with, or to a world in which they must become more creative and focus on the bigger picture, which may require more focused thinking and energy, is unclear. Machines offer the opportunity to stop doing mundane tasks and embrace more creative, thoughtful pursuits and ideas. Why are humans afraid of that? We’d love to hear your thoughts on the issue.

A Week in the Life of Blockchain

It’s hard to ignore the omnipresent buzz surrounding Blockchain. It is mentioned in every media outlet we consume, it infiltrates seminars and conferences we attend for work, it’s a constant on everybody’s social media feeds and it pops up in conversation all too often.

And as the noise around blockchain increases to almost deafening levels, so too does the polarization between blockchain’s opposing factions, with evangelists and naysayers alike shouting ever louder. If we take a look at just a small selection of articles which have appeared in the media over the last few weeks alone, we can observe how a remarkably conflicted and jarring landscape is starting to develop.

Blockchain is “useless”

This week at the Blockshow Conference in Las Vegas, economist and renowned crypto-critic Nouriel Roubini, who was dubbed Dr Doom after he predicted the 2008 global economic crisis, stated that “blockchain is probably one of the most overhyped technologies ever, with the amount of hype vastly exceeding what are going to be the applications of it.” In a follow-up interview with Forbes, he went as far as to say, “It’s useless technology and will never go anywhere because of the proof of stake and scalability issues. No matter what, this is not going to become another benchmark because it is just too slow.”

Scalability and speed are concerns echoed by Daniel Newman in his article entitled Don’t believe the hype: understanding blockchain’s limits, who also adds trust and security into the mix of stumbling blocks which are dragging blockchain out of its “honeymoon phase”, as he puts it.

Downplaying growth

Meanwhile, a cluster of reputable IT analysts have published reports in recent weeks which bust the myth of widespread blockchain adoption and roll-outs. Gartner’s 2018 CIO survey found that just one per cent of participating CIOs indicated any kind of blockchain adoption, with 77 per cent claiming they had no interest in the technology or any planned action to investigate or develop it. The firm also claimed that the technology is entering a “trough of disillusionment” phase as interest in blockchain “wanes”.

Backing this up, Forrester also released a report which estimates that 90 per cent of active blockchain projects will either be put on hold or abandoned altogether.

The other side of the (bit)coin

Despite all this, the balance in the media tips firmly in the opposite direction, as international governments, financial companies, tech firms and many others, eager to be seen as a cut above the rest and to skyrocket their share prices, seek to promote their adoption of blockchain technologies.

Take this week alone as an example; we’ve seen:

· The World Bank launching its first blockchain bond

· Russian state pension fund announcing plans to deploy blockchain tech

· The OECD announcing a new Blockchain Policy Forum event

· China launching its new Blockchain Lab initiative

And this is merely a minuscule sample of the vast cacophony surrounding blockchain in any given week.

Just to build up the hype even further, PwC this week published its 2018 Global Blockchain Survey, which found that an astonishing 84 per cent of executives interviewed said their companies are “actively involved” with blockchain technology, research which confusingly paints a completely different picture from the studies published by the IT analysts.

Staying grounded

As with many exciting and innovative technologies, everybody wants to jump on the bandwagon and find a way to apply it and make it work for their business. Some will find that once the initial excitement recedes, a project is deemed too ambitious or that there are too many barriers rendering it difficult to get off the ground, whereas others, often with more realistic applications, will succeed and transform the way they work.

With blockchain we are at a pivotal phase in which companies need to understand exactly how the technology can fit into their work cycle and be of benefit to them. As Varun Mayya, CEO of Avalon Labs, says: “The good news is that good projects will continue to survive and authentic ones will continue to reap the benefits of both blockchain and smart contracts.” I am delighted to be part of one such organisation, recognised last week by Forbes as one of the companies using blockchain technologies to help transform the publishing industry and improve education.

Re-inventing the Research Text

There’s been a sustained conversation for a while now about how tech will impact the ways research is produced, read, and propagated. With the advent of complex digital books, for example, researchers will finally be able to store the wealth of raw data and sources they collected during fieldwork and make it immediately accessible to anyone who wants more information, instead of forcing readers to go online and dig through files.

But innovations like this take the book itself (as it currently exists) for granted. Even though it doesn’t strike us as such in our everyday lives, the book is a profoundly unnatural way of presenting information to others. It requires all the relevant information, regardless of subject, complexity, and source type, to fit within a linear text of typically a few hundred pages. A fascinating question to consider is how the reading experience could change if we were willing to alter the book’s linearity itself.

Consider for example the set of texts that proceed axiomatically, that is, by building an elaborate deductive system from a set of basic assumptions. I have in mind works like Newton’s Principia, Wittgenstein’s Tractatus, and Spinoza’s Ethics. I can’t speak for the authors, but for most of us who attempt to read these today, understanding what’s being said usually means frantically flipping back to the various theorems proven before in order to put them together in a way that makes the later theorems intelligible. The biggest hurdle to faster learning here is the linearity our current books impose on us. Smart ebooks could change this, and there are already some indications of how this can be done away with.

A PhD student at Boston College, John Bagby, created visualizations of the entirety of Spinoza’s Ethics, with each node representing a proposition.

Clicking on a node reveals its connections to other nodes, and also brings up a dialogue box stating all the relevant propositions (the one selected, plus its parent and child propositions). Just like that, the linearity that was taken to be constitutive of our reading experience for centuries is shown to be a mere constraint, and the visualization makes the connections far easier to pursue. That isn’t to say that reading Spinoza becomes easy, but it’s undeniable that this makes the text tremendously more accessible, both for beginners attempting to read it and for experienced researchers hunting down some obscure subtlety.
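
It’s worth noting how little machinery this kind of non-linear reading actually needs. Here is a minimal sketch in plain Python (with invented proposition labels; Bagby’s actual implementation may well differ): store the text as a graph, and selecting any proposition surfaces what it rests on and what it supports.

```python
# A deductive text stored as a graph: each proposition lists what it is proved from.
# Labels are invented for illustration; the real Ethics has hundreds of nodes.
dependencies = {
    "E1p5": ["E1d3", "E1a1"],    # proposition 5 rests on definition 3 and axiom 1
    "E1p11": ["E1p5", "E1d6"],
    "E1p14": ["E1p11", "E1p5"],
}

def parents(node):
    """Propositions, definitions, and axioms this node is proved from."""
    return dependencies.get(node, [])

def children(node):
    """Later propositions that rely on this node."""
    return [later for later, deps in dependencies.items() if node in deps]

# "Clicking" a node brings up everything needed to make sense of it:
selected = "E1p5"
print(selected, "rests on", parents(selected), "and supports", children(selected))
```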

As ground-breaking as this is, an obvious drawback is that very few books lend themselves to being transformed in this particular manner. But we shouldn’t be too quick to dismiss its relevance. For far too long we’ve been asking ourselves what the next big idea will be. Perhaps it is time to acknowledge that the future isn’t about a single all-encompassing idea but many ideas, pushing in many different directions. For such a future, however, tech companies will have to stop thinking in terms of delivering a single, clear-cut solution, and instead think in terms of platforms capacious enough to allow different authors, designers, and publishers to push the envelope in their own ways, on their own terms.

Moving away from the "stupid" e-book: An opinionated survey of our options

Earlier this year, Hachette CEO Arnaud Nourry remarked that the ebook is a stupid product, since "it is exactly the same as print, except it’s electronic. There is no creativity, no enhancement, no real digital experience." While shocking in its honesty, the remark also prompts an obvious question: what would a non-stupid ebook look like?

When contemplating how technology can alter the future, there are two risks to look out for. The first is the false positive, where we fantasize recklessly about tech which actually isn’t the revolutionary game-changer it is imagined to be. The second is the false negative, where we are insufficiently sensitive to the potential of something before us. And that’s not even taking seriously the role of sheer luck in making or breaking a product. Still, speculate we must, and so we might as well do it with full self-awareness about the risks undertaken. So what could the next wave of ebooks consist in?

Custom Books

One obvious-seeming answer is to point to personalization. While we may well have the tech capabilities for this one day, I’m still quite skeptical about how popular it would be.

For one, we already have some idea of what personalization could look like. Companies already provide services where they insert names into fixed slots in books, allowing you, or anyone you choose, to be the protagonist of the story. An intriguing idea, but also one that strikes me as a gimmick anyone would tire of fast. Admittedly there’s some more space for children’s books to innovate in this regard, for example the way “Put Me in the Story” incorporates photos of kids into the books they read, but again I’m not sure the trend can outlast the novelty factor.

Interactive books which draw from video games, where the reader has to choose how the plot proceeds and what the character should do, will also be possible. But we already have video games, and besides, if I wanted to “do things”, I would just go outside. Unless books can somehow deliver on adventure in ways the cutting-edge video gaming industry cannot, this sort of personalization is unlikely to gain much purchase in the market either.

Perhaps the most radical possibility is that of books custom written for an individual based on interests and favorite genres. With the wealth of information about ourselves we store online, anyone brave enough to give access to a publisher might be able to get a book version specifically written for them! I can conceive of this taking off, but even here I suspect all might not be well. A large part of the book reading experience apart from the actual reading consists in listening to others talk about it, talking about it online and in person with friends, reviewing it and reading the reviews of others, and above all arguing over minute details with others who love/hate it just as passionately. In other words, there are social aspects and rituals predicated on all of us reading the same book, which would be lost if all of us were reading different versions. So even if this kind of personalization were possible, our shared culture of reading might have to change considerably, and not necessarily in a positive way.

Interactive Books

A far more promising approach is the incorporation of multimedia in books, which can include audio, video, GIFs, maps, AR, and VR. The application in travel guides and books on faraway places is obvious, and I can’t wait to use books that let me see how various locations actually look before booking a vacation, or, perhaps even more importantly, that give a sense of distant places to those who aren’t able to make it there just yet. And children, who’ve shown themselves quite susceptible to the charms of YouTube, will probably be delighted at having their dull school exercise books guided by Dora the Explorer (or someone else less likely to violate copyright).

An unexplored avenue for multimedia is how other genres might find surprising potential. In high fantasy, for example, it is common for a map to be provided at the beginning of the book and for characters to traverse it during the story. To be able to explore these maps immersively while reading, to get a sense of how the journey proceeds, could enhance the experience significantly (and I might have spent far less time flipping back and squinting at Tolkien maps as a teenager).

The desire for a multi-media experience isn’t restricted to children, of course. When the distinguished philosopher G. A. Cohen delivered his Valedictory Lecture at the age of 67, he sent his colleagues a CD recording along with the text of the lecture itself, with a note saying, “please don’t read the text except when listening to the CD, because the text is much less funny unspoken.” And who knows what other applications might be found?

As promising as these enhancements are, some caution is in order. Ever since Our Choice, Al Gore’s “first feature-length interactive book” from 2011, there have been predictions about the rise of the interactive book, and they have failed to materialize. What this shows us, I think, is that while there is definitely space for enhancing the reading experience, readers don’t necessarily want the core experience itself transformed. As fun as map immersion would be, when it comes to the reading itself I still want uninterrupted text, with the enhancements brought up only when desired, and typically desired rarely. For all the talk about change, I can’t really imagine giving up the experience of sustained reading itself.

The fully interactive text, then, looks like a false positive: something that seemed like an obvious game changer but instead fizzled out. The ability to rotate a windmill in Our Choice by just blowing on the screen, while a cool party trick, has very little use for readers. And having videos disrupt reading is distracting, especially after the novelty wears off.

But I wouldn’t dream of suggesting that the ebook, as it is, is the insurmountable pinnacle of innovation. Nourry is right: the current ebook really is stupid! But at least part of the reason for this languishing is that we’ve been a little too taken with tech capabilities, instead of asking whether readers would actually find their experience made better over the long term. What publishing needs is a tech philosophy which doesn’t allow current reader preferences to limit change, but which also pays attention to where readers actually are with regard to their habits and needs. Luckily, the now-burgeoning industry of publishing-specific tech might mean we could have a truly smart ebook sooner than anyone might suspect.

Why Rights and Licensing Automation is Essential to a Publisher's Bottom Line

An Interview with Jane Tappuni, General Manager, IPR License

The rights department is not an area in which publishers tend to invest, and yet it’s one of the key areas of the industry with untapped revenue opportunities. With most rights deals still handled via paper contracts and one-to-one communication between editors and rights holders, the process can be slow. Furthermore, it’s hard for publishers to keep an accurate accounting of what rights they hold (and, sometimes, of when a license runs out or rights revert to another party) and of how to monetize those rights against current market trends, and it’s even more difficult to generate a quick deal in order to free up time for more complicated rights deals that may require more thoughtful consideration.

Enter technology. In a blog post earlier this year, we discussed rights deals and smart contracts, and illustrated how we thought they might be useful to publishers: “For publishers, the world of contracts unfortunately continues to be predominantly ruled by paper, creating a lag in transactional payment and royalty collection. But, that doesn’t have to be the case going forward.”

By automating systems in the rights department, using tools which generate smart contracts that can be resolved and signed in a matter of moments, a publisher can not only increase their revenue but also gain a better understanding of the marketplace, enabling better acquisitions in the future. So why are publishers so hesitant to adopt technology in the rights department?
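
To make the mechanics concrete, here is a minimal sketch of the idea in plain Python (all names and figures are invented, and a real smart contract would live on a blockchain, which this toy does not): the contract holds the agreed terms and executes the transfer automatically once both parties have signed.

```python
from dataclasses import dataclass, field

@dataclass
class RightsContract:
    """A toy stand-in for an on-chain smart contract governing a rights sale."""
    work: str
    territory: str
    licensee: str
    fee: float
    signatures: set = field(default_factory=set)
    executed: bool = False

    def sign(self, party: str) -> None:
        self.signatures.add(party)

    def settle(self) -> str:
        # The contract self-executes: no middleman, no paper lag.
        if {"rights_holder", "licensee"} <= self.signatures and not self.executed:
            self.executed = True
            return (f"{self.territory} rights to '{self.work}' transferred "
                    f"to {self.licensee}; fee of {self.fee} released.")
        return "Waiting for all signatures."

deal = RightsContract(work="Example Title", territory="German-language",
                      licensee="Verlag GmbH", fee=5000.0)
deal.sign("rights_holder")
deal.sign("licensee")
print(deal.settle())
```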

Jane Tappuni, an expert on the frontlines of the rights and licensing industry and General Manager of IPR License, works with publishers and rights every day. As a platform built to discover, buy, and sell international rights online, IPR License confronts daily the challenges publishers face in this brave new technological world. We asked her to weigh in on how technology can help publishers… or not.

Jane Tappuni, IPR License

PageMajik: How will smart contracts help publishers?

Jane Tappuni: The smart contract can be built on the blockchain and allow for the IP to be transacted, or, in simple terms, for the creator to make money. Smart contracts help you exchange something of value in a transparent, conflict-free way while avoiding the services of a middleman. In publishing this could mean a better way to transact rights, by taking the information out of publishing organizations and into a blockchain with smart contracts attached that allow for the rights sale to take place.

PageMajik: Do you see smart contracts significantly changing the way publishers handle rights and licensing in the future or will it be a slow adoption over many years in particular sectors of the industry?

Tappuni: Yes, I think there is an opportunity to change and improve the way rights and licensing are handled via a blockchain and smart contract solution. This is a massive behavioural shift, from internal, siloed systems to a shared, verifiable database of sorts. This change in behaviour could take a long time.

PageMajik: When you work with publishers, what have been their biggest concerns about adopting technological improvements in their business?

Tappuni: Their biggest concern is value for money; return on investment is always the number one concern.

PageMajik: Do you see any downside to publishers relying on technology to help improve their business?

Tappuni: Not as long as publishers choose the right technology tools for the problem they want to solve. All too often organizations implement new software to repeat the processes they already have in place. New technology implementations are a good time to really think about process improvement.

PageMajik: With the adoption of smart contracts to secure rights transactions and track royalties, providing more revenue for publishers and freeing up staff to focus on other work, how do you see the international rights and licensing industry changing? Will there be additional challenges to overcome?

Tappuni: I see this as a possible solution to the huge problem of rights tracking. At the moment publishers use a variety of rights solutions to store their rights data, some good and some not so good. This would take the rights data out of siloed publishing systems owned by IT and into a secure, accessible arena. The day-to-day role of a rights professional would not change, as they would still be performing a rights sales role, but they would be using a global blockchain solution as a positive tool for rights ownership data.

Jane Tappuni has more than 20 years of publishing experience and is currently the General Manager (consulting) at IPR License, a place to discover and buy international rights and permissions online. IPR License is owned by The Frankfurter Buchmesse, Copyright Clearance Center and the China South Publishing & Media Group. Jane is a specialist in publishing technology, with a focus on transactional IP management and solutions and also a graduate of the Oxford University Said Business School Blockchain Strategy Programme.

Is the Science behind AI just Alchemy?

In Primo Levi’s celebrated short story collection The Periodic Table, the story titled “Chromium” illustrates how our collective ways of behaving incorporate procedures whose justifications no longer apply. When Levi worked in a paint manufacturing company, he found that a certain batch of paint had turned solid due to an accidental excess of chromium oxide. In response, he added ammonium chloride to the paint to make it liquid again, and recommended continuing to do so until that batch was used up. He then left his job, but when he returned 10 years later, he found that people were still adding ammonium chloride despite the bad batch having long been replaced: "And so my ammonium chloride, by now completely useless and probably a bit harmful, is religiously ground into the chromate anti-rust paint on the shore of that lake, and nobody knows why anymore."

According to AI researcher Ali Rahimi, something analogous is happening in the field of AI research today. Last December, he argued that the use of machine-learning algorithms had become a form of alchemy, since the researchers developing and using them don’t know why their algorithms work or why they fail.

Algorithms are tweaked and tested by trial and error to generate success against benchmarks, but it really isn’t possible to pinpoint whether that success is due to the core algorithm or whether peripheral add-ons are doing all the heavy lifting. Rahimi thinks this is an unhealthy state of affairs and urges greater attention to explanations and root causes. He must have been onto something, because his talk received 40 seconds of standing applause from the audience.

Not everyone agrees with Rahimi, however. According to Facebook’s Yann LeCun, Rahimi is fundamentally wrong because while understanding is certainly good wherever you can get it, understanding often only follows the creation of methods, techniques, and even tricks. To then insist that the creation of new technology only takes place where understanding is possible, would be to cripple innovation. He even makes this claim concrete by arguing that this was precisely why neural nets didn’t get the attention they deserved for over ten years.

Still, I get the sense that Rahimi and LeCun are arguing past each other: there’s no indication that Rahimi wants the kind of comprehensive understanding that would stifle innovation, so much as a more rigorous approach that avoids pitfalls. In a recent paper, for example, he calls for measures like the following (a sketch of the ablation measure follows the list):

  • Breaking down performance measures by different dimensions or categories of the data

  • Including full ablation studies of all changes from prior baselines, testing each component change in isolation and a select number in combination

  • Informing understanding of model behavior with intentional sanity checks, such as analysis on counterfactual or counter-usual data outside the test distribution

  • Finding and reporting areas where a new method does not perform better than previous baselines
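
As an illustration of the second measure (a minimal sketch with invented component names and a made-up scoring function, not code from the paper), an ablation study simply re-runs the benchmark with each proposed change enabled in isolation, and select changes in combination, to see where the gains really come from:

```python
import itertools

# Invented components added on top of a prior baseline.
CHANGES = ["new_regularizer", "data_augmentation", "lr_warmup"]

def evaluate(enabled):
    """Stand-in for training a model and scoring it against the benchmark."""
    contributions = {"new_regularizer": 0.4, "data_augmentation": 1.7, "lr_warmup": 0.1}
    return 80.0 + sum(contributions[c] for c in enabled)  # baseline score = 80.0

baseline = evaluate(set())
full = evaluate(set(CHANGES))
print(f"baseline: {baseline:.1f}  full method: {full:.1f}")

# Each change in isolation...
for change in CHANGES:
    print(f"+{change}: {evaluate({change}) - baseline:+.1f}")

# ...and a select number in combination.
for pair in itertools.combinations(CHANGES, 2):
    print(f"+{'+'.join(pair)}: {evaluate(set(pair)) - baseline:+.1f}")
```

Run against a real model, a loop like this is exactly what exposes the case Rahimi worries about: a headline gain that turns out to come almost entirely from one peripheral add-on rather than the core algorithm.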

Measures like these are clearly not intended to stop progress, but to ensure a more sustainable model of growth. Still, whether they will actually generate better results is a question that cannot be answered through armchair philosophy; we’ll simply have to give these methods a shot and see if they prove fruitful.

The State of Automation - Part 4

Over the previous weeks we’ve been analysing the impact automation and disruptive technologies will likely have on the publishing industry. We’ve explored the innovations on the horizon and how the different roles in book publishing will be affected by them in the short-, mid- and long-term future.

Automation will have a massive impact on publishing, there is no doubt whatsoever about that. But whether this impact is negative or positive depends greatly on the industry response. Will publishers let innovation happen to them? Or will they act quickly to understand how new technologies work and can be applied to their organisations, then evolve their working practices and reskill their workforce accordingly?

In The Book Industry Study Group’s “State of Supply Chain” survey conducted earlier this year, 33% of respondents said they were somewhat or very concerned about the potential to be replaced by technology or artificial intelligence. This week, in our final post of this four-part series, we look at survival and what publishers, and those who work in the industry, can do to confront the new reality of what many are calling the fourth industrial revolution.

Knowledge is power

If the last 20 years have taught us anything, it’s that rapid innovation can, and will, gobble you up if you’re not prepared for it. Most industries have suffered, some more than others, at the hands of disruptive technologies they were completely ignorant about and ill-prepared to respond to. This is a lesson we all must learn from.

Publishers, who traditionally tend to adopt a rather cautious approach to new technology, will need to know exactly what is around the corner when it comes to automation. Not knowing will mean not being able to respond quickly enough when the world around them is transforming at break-neck speed.

Publishing houses which are aware of these developments, those prepared to take an open-minded approach and start to experiment, and those proactively seeking ways to use automation to their benefit, will automatically be in advantageous positions.

Humans are (still) essential

A survey conducted by Evolve in 2016 revealed that the most in-demand skills in the workplace are “the ability to work cooperatively, flexibly and cohesively”. These soft skills are areas where humans outperform robots (well, at least for the next 15 years, which is when some experts predict computational power will equal that of the human brain). Recognising this is key.

While AI will do a fantastic job of automating a variety of tasks, in most cases AI technology is at its most powerful when it interacts with humans and benefits from the creativity, imagination and judgement of the human brain. To this end, being able to harness automation-driven technology, play to its strengths, and align it with human capabilities will give publishers an edge.

In the real world, this can be applied in the editorial department, for example, where AI can be used to do the heavy lifting when it comes to proofing manuscripts, but the process will still need to be overseen by human eyes. Or in the production department, where AI can be applied to a great many production tasks, though judgement calls and business-critical decisions, on print runs, for example, will still need to be made by humans.
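
A minimal sketch of this human-in-the-loop pattern in an editorial setting (plain Python with invented flagging rules, not any particular product): the machine does the tedious scanning, and an editor makes the final call on every flag.

```python
import re

def machine_pass(sentence):
    """Cheap automated checks: the machine's heavy lifting in a first proofing pass."""
    flags = []
    if re.search(r"\b(\w+) \1\b", sentence, re.IGNORECASE):
        flags.append("repeated word")
    if "  " in sentence:
        flags.append("double space")
    return flags

def editor_decides(sentence, flag):
    """Stand-in for the human step: in a real tool this is an accept/reject UI."""
    print(f"flag {flag!r} in {sentence!r} -> awaiting editor approval")
    return False  # nothing is changed until a human says so

manuscript = ["The the report is ready.", "This sentence is  fine."]
for sentence in manuscript:
    for flag in machine_pass(sentence):
        if editor_decides(sentence, flag):
            pass  # apply the correction only after human sign-off
```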

Next gen workforce

Many believe that in a world of automation the only people who will survive will be those who came out of the womb coding, and that only employees with an intimate understanding of the latest tech will be of any use in the future. Although rather exaggerated, this is to some extent true. As technology comes to play a much more influential role in our working lives, job seekers who are tech-savvy and can prove they have the ability to work alongside the latest innovations will always have an edge.

However, on the other side of the coin, another view is that widespread automation will make those who have heightened emotional intelligence and a softer skill base more in demand, as reflected by this BBC article on “automation resistant skills”.

Either way, those who combine an innate understanding of technology and a willingness to work with it with a range of emotional skills will be the most likely to thrive in an automated workplace, and it is these candidates who will be most valuable to publishers.

Automation is going to change book publishing as we know it beyond all recognition. It will be as gradual as it will be sudden. It will be as beneficial as it will be damaging. Publishers will flourish and perish, and employees will gain and lose. This is what has happened during every major period of disruption since the dawn of time. But the industry has a small window of opportunity to at least learn about how the publishing business might be affected and what sort of steps can be taken to exploit opportunities afforded by automation as opposed to getting left behind.

Ignore the Headlines and Embrace the Bots

In 2018, bots became even more prevalent in the marketplace. According to a study by Distil Networks, a leading bot security company, more than two-fifths of web traffic (42.2%) in 2017 was not human. Though some may find this trend surprising or even alarming, bot traffic has been growing consistently for the last five years as more companies add bots to their workflow systems. What has proven a growing concern, according to the media, is the influx and rise of “bad bots.”

Bots can be incredibly helpful, processing mundane or repetitive tasks and allowing humans the opportunity to do more creative, thoughtful work. Bots have been adopted to conduct customer service tasks and to help curate individual products for users, among other activities. But there are also bad bots, which first drew attention when they were used to buy tickets online and then offer the same tickets at a much higher resale price. These bots are also responsible for stealing personal information, social media harassment, disrupting the marketplace, and, in the largest show of bot activity, potentially impacting the 2016 US presidential election. The presence and prevalence of bad bots is increasing too, with bad bot traffic up 10% last year, slightly outflanking good bot traffic (21.8% of total web traffic comes from bad bots vs. 20.4% from good bots).

What makes bots unique is that they tend to mimic, and mimic very well, human behavior. That is what makes bad bots particularly hard to battle: they are often very difficult to detect. The existence and growing pervasiveness of bad bots adds to the public concern about the implications of artificial intelligence and whether or not AI can “turn against humans.”

But, as with any technology, security and defense systems are being developed to thwart bad bots. The first legislation, the Better Online Ticket Sales (BOTS) Act, passed in September 2016 to deal with the aforementioned ticket-buying bots (though the problem persists despite the legislation). An op-ed in Fortune earlier this year calls for both private security measures and government intervention, through creating or updating legislation that would levy heavy fines and penalties on those parties creating bad bots.

Some in the technology world are leading the charge against bad bots, including Twitter, which has questioned 9.9 million accounts thought to be spam or bots, created more sophisticated authentication procedures, and prevented an average of 50,000 spam or bot accounts a day from being set up.

Though bad bots are a problem and a threat to the marketplace, they should not overshadow the use of good bots to increase efficiency, improve systems, and analyze data in a variety of industries. Headlines scream that bots are bad, but, in reality, half of the bots out there are refining processes, allowing for further creativity, development, and increased revenue.

As Harley Davis, Vice President, France Lab and Decision Management, IBM Hybrid Cloud, writes in a February blog post, “Businesses need solutions that assist in automation rather than simply fulfilling it, handle tasks intelligently and are highly autonomous. These solutions also must deliver customer-centric and personalized experiences, at enormous scale, without a massive back-end operation to prop them up.” The next generation of bots will not simply conduct mundane, repetitive tasks; they will be able to adapt as a company grows and changes, taking on each challenge intelligently. A system that can fluctuate as goals and needs change is crucial to progress and advancement as the marketplace transforms.

#CockyGate and the Perils of Trademark Bullying

Trademarks are among the most important ways creative professionals can protect their brand, ensure their fans can easily identify their work, and protect themselves from similar products from others. But trademark allocation brings up tough questions about what a reasonable trademark would consist in, and at what point trademarks are being used unfairly to stifle competition.

Since 2016, novelist Faleena Hopkins had been writing romance novels in her ‘cocky’ series, for example “Cocky Roomie” and “Cocky Biker”. She had written 19 books and sold 600,000 copies in this series, and so, wanting to protect her brand, she decided to trademark “cocky” to keep copycat authors from riding on her coattails. When her trademark registration was issued in April 2018, she sent out notices to several authors of books with “cocky” in their titles, informing them of the trademark violation and asking them to either change their titles or face legal action.

Initially, a few authors complied with her demands. Jamila Jasper had published a book titled “Cocky Cowboy” in March 2018, which had the same title as a book Hopkins had published in September 2016, and was one of the authors to receive a cease and desist letter. She shared a screenshot of it on Twitter.

She wrote on her blog that she had decided to err on the side of caution and unpublish her book, republishing it after renaming it “The Cockiest Cowboy To Have Ever Cocked” and paying to redesign its cover. Although she said she’s trying to remain optimistic, she admitted that “it hurts to be attacked and it hurts to have your integrity questioned”. She also argued that it is “exceedingly common” in the romance publishing industry to have similar, even identical, titles, and that it was therefore incredibly unfair to demand that other authors take down books they had already published and to ask them to refrain from using “cocky”, an incredibly common descriptor in the genre.

The internet agreed with Jasper, and a massive online backlash was unleashed against Hopkins. Writers piled onto her social media accounts, with negative comments far outnumbering likes and retweets or shares. On Facebook, she was inundated with comments from authors and readers declaring that they were going to boycott her. On sites that allow reviews, like Goodreads and Amazon, her books were hit with negative reviews which explicitly referenced how she had targeted indie authors who didn’t have the resources to fight her in court.

Eventually, the Authors Guild and Romance Writers of America filed suit challenging her trademark of a common word, and won, with the judge ruling that Hopkins’ desired “preliminary injunction censoring the continued publication of various artistic works is unwarranted and unsupported”. With Hopkins then deciding to abandon her trademark battle, the #CockyGate saga finally came to an end.

While this particular incident might have ended well, it also reveals that the process through which trademarks are approved allows authors to overstep and try to trademark overly generic phrases, whether intentionally or otherwise. One innovative way to battle this is CockyBot, a Twitter bot that automatically finds and tweets fiction-related trademark applications from the US Patent and Trademark Office’s database.

For each application, CockyBot tweets out the phrase being trademarked, the status of the application, the documents submitted, and an Amazon search link for products that might be related to the phrase.
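
As a rough sketch of what a bot like this involves (CockyBot’s actual sources and code may well differ, and the application record below is invented), the core step is just turning one trademark filing into a tweet with an Amazon search link:

```python
from urllib.parse import quote_plus

def amazon_search_link(phrase):
    """Link readers to products that might already use the phrase."""
    return f"https://www.amazon.com/s?k={quote_plus(phrase)}"

def compose_tweet(application):
    """Turn one trademark application record into a tweet."""
    return (
        f"New trademark application: \"{application['phrase']}\"\n"
        f"Status: {application['status']}\n"
        f"Docs: {application['documents_url']}\n"
        f"Possibly related books: {amazon_search_link(application['phrase'])}"
    )

# A made-up record standing in for a real USPTO filing.
app = {
    "phrase": "dragon slayer",
    "status": "pending examination",
    "documents_url": "https://tsdr.uspto.gov/#caseNumber=00000000",
}
print(compose_tweet(app))
```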

While most of the applications seem acceptable, the occasional generic term like “dragon slayer” or “big” is also included. Clearly, not everyone has learnt from the Hopkins affair.

However, we need to remember that while the kind of behaviour Hopkins engaged in might be unacceptable, trademarks are an essential part of how creative professionals make it and survive. To ensure that such incidents aren’t repeated, we need a way to ensure that authors can check for similar titles on the market, while not letting frivolous trademarks impede them.

A possible solution is technological. While CockyBot is certainly a step in the right direction, it still relies on human users looking through the Amazon search list themselves to check whether any products from other creators contain the word or phrase being trademarked. What we need going forward is a way for authors to check whether the title they are planning to use is already in use, as well as whether it would violate someone else’s legitimate trademark.
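
As a minimal sketch of what such a check could look like (invented title list; a real tool would query live marketplace and trademark data), even Python’s standard library can flag close matches before an author commits to a title:

```python
from difflib import SequenceMatcher

# Stand-ins for titles already on the market.
existing_titles = ["Cocky Cowboy", "Cocky Biker", "The Dragon Slayer's Oath"]

def similar_titles(candidate, threshold=0.8):
    """Return existing titles whose similarity to the candidate exceeds the threshold."""
    matches = []
    for title in existing_titles:
        ratio = SequenceMatcher(None, candidate.lower(), title.lower()).ratio()
        if ratio >= threshold:
            matches.append((title, round(ratio, 2)))
    return matches

print(similar_titles("Cocky Cowboys"))  # [('Cocky Cowboy', 0.96)]
```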

As the number of books hitting the market increases, and new authors try their hand at writing for niche audiences, it is no longer possible for each person to be mentored individually and shown their way around the industry. Luckily, tech solutions like well-crafted automation can be of enormous help to these newcomers, helping them avoid pitfalls they might not even have imagined were problems.
