Trust, but verify

One popular conception of science stresses the need to always question, to always remain skeptical. However, given that scientific work requires the coordination of a massive number of people scattered across the world and across disciplines, it is the ability to trust the work of others that allows scientists to build on it with their own. The obvious question, then, is why people trust each other at all.

In his book A Social History of Truth, the historian and sociologist of science Steven Shapin offers a surprising answer to how this trust initially came about. Science until the mid-19th century was primarily the pursuit of gentlemen. Birth, wealth, and behaviour were used to judge who was and wasn’t reliable. If a man was wealthy (and it was always a man), it was assumed that he had nothing to gain and plenty to lose in lying about results, since he was financially independent and was embedded in a culture of honour. Gentlemen trusted each other not because they naively believed good science was inevitable but because of non-scientific facts about their mutual social status.

As time passed, this gatekeeping of science ended and anyone (in principle at least) could pursue science. In this context, why trust anyone else? Most scientists are, of course, committed to truth-finding, and the repercussions of being found out serve as a strong deterrent to anyone tempted. But in our era of publish-or-perish, short-term cheating and sloppiness might still be tempting to many. In fact, there is already a prominent case of this happening.

In December 2014, then UCLA political science graduate student Michael LaCour and Columbia University political science professor Donald Green published a paper in Science titled “When contact changes minds: An experiment on transmission of support for gay equality.” According to the paper, door-to-door canvassers who were gay were more effective than their straight counterparts at convincing voters to support same-sex marriage in the long term. The study was picked up and touted in several major media outlets, including The New York Times, The Washington Post, and The Wall Street Journal. By chance, two grad students at UC Berkeley, David Broockman and Joshua Kalla, were trying to carry out a similar study, and during their attempt to replicate LaCour and Green’s result they realised that the original paper’s data had been fabricated. They published their exposé “Irregularities in LaCour,” and the paper was retracted.

This episode itself is fascinating, but what I would like to draw attention to is how such an error occurred. Green, although the senior researcher, had never even seen the data which LaCour had fabricated and had instead taken it on faith. When later asked why, Green said, “It’s a very delicate situation when a senior scholar makes a move to look at a junior scholar’s data set. This is his career, and if I reach in and grab it, it may seem like I’m boxing him out.” In response, Ivan Oransky, a co-founder of Retraction Watch, said, “At the end of the day he decided to trust LaCour, which was, in his own words, a mistake.” The New York Times article in which both of them were quoted summarized the problem: “The scientific community’s system for vetting new findings, built on trust, is poorly equipped to detect deliberate misrepresentations.”

What this episode reveals is that our procedures are, for the most part, still based on trust, and that this makes them vulnerable. Reflecting on the LaCour retraction, C. K. Gunsalus, Director of the National Center for Professional and Research Ethics, advocated for greater openness, even titling the piece “If you think it’s rude to ask to look at your co-authors’ data, you’re not doing science.” It is a fantastic piece, but my one disagreement is that many of its suggestions place all the responsibility on authors themselves to institute good practices. I think a better idea is to build a culture of responsibility institutionally rather than leaving it to individual choice. If collaborators feel uncomfortable asking each other for data or their sources of funding, then the only way around this is to mandate that they do so.

Of course, even this won’t stop all fraud. Multiple authors can still fabricate results together, or be too lazy to verify a colleague’s work and lie about having done so. And this would probably feel too top-down for some academics, who might find it tiresome to fill in institutionally mandated information at every significant stage of their work. But if we want a culture of robust checks and balances, we need to start working towards such a framework.

How the History of Peer Review can help us think better about change

Tech thrives on disruptive innovation, so it comes as no surprise that publishers regard proposals from publishing tech with suspicion — after all, why change something that works? At least part of this resistance can be traced to a tacit assumption that the current system has been in place for a very long time. A look at history disabuses us of this notion.

Consider modern peer review. For anyone even loosely associated with academia, the process of submitting a draft to a journal, to be anonymously evaluated by two or three referees, is probably familiar and taken for granted as simply the way things are done. In a recent article in the history of science journal Isis, Melinda Baldwin, a senior editor at Physics Today, argues that this norm of compulsory peer review is more recent than most of us would imagine. Here’s her narrative in brief.

Although the practice of sending a submission to experts for comment can be traced back at least to Henry Oldenburg, the first Secretary of the Royal Society, for centuries peer review was neither systematically carried out nor seen to bestow scientific credibility. A more familiar system emerged in 1831, when William Whewell proposed that submissions to the Philosophical Transactions be openly commented on by two Fellows of the Royal Society in the new journal Proceedings of the Royal Society of London. Whewell’s proposal for published reports was never picked up, but it became increasingly common to send submissions out to anonymous referees.

Still, even in the mid-20th century, it was not uncommon for all editorial decisions to be made in-house, with editors consulting external referees only occasionally, when they deemed it essential. The shift to external peer review was brought about by the increasing amount of work that editors had to do. For example, editors at Science reasoned that “the job of refereeing and suggesting revisions for hundreds of technical papers is neither the best use of their time nor pleasant, satisfying work.” It was simply the increased burden that gave rise to the popularity of external review.

As for the perception that peer review was crucial to scientific legitimacy, Baldwin argues that we need to look at the specific history of the late 20th century United States. The Cold War led to a ballooning of science spending, and soon this increase was noticed by the public and came under scrutiny and skepticism. Under pressure to become more accountable to non-scientific political actors, modern peer review was touted as the only solution that could ensure both scientific autonomy and public accountability. At the end of this saga, Baldwin argues that it was accepted that any [scientific] organization had to rely on external referees in order to judge “good science” properly.

The point of looking at this fascinating history is not to simplistically argue that peer review should be done away with because it is new or because it has a history that is embedded in a particular political culture. After all, any aspect of how things are done can be deconstructed in this manner. However, what histories like this make clear is that no part of the rules we abide by and the institutions that bind us are eternal or set in stone. Any proposed change, however initially surprising, should be given a fair shot instead of being resisted because of procedural inertia or complacence. Change is always around the corner, however solid our present world may appear to be.

Why the indies need Artificial Intelligence

This week, I was fortunate enough to address a large group of publishing industry leaders at the IPG (Independent Publishers Guild) Spring Conference in a wide-ranging discussion about Artificial Intelligence and its impact on a number of industries, including publishing.

It was encouraging to see so many publishers unjaded by the AI hype which has taken hold over the years and still as eager as ever to debate and explore the merits and benefits these technologies can bring to their organisations.

Attendees were keen to understand the intricate details about the ways in which AI could change our day-to-day working lives and what potential savings on resources, efficiency and budgets could be made as a result.

It may not be obvious to many, but I’ve always felt the indie sector has the potential to be something of a hotbed for pioneering new-age technology like AI. While their budgets might be smaller than those of larger publishers, who can afford to splash vast amounts of money on innovations and technologies, there are a number of reasons why the AI revolution in publishing could start here.

Indies need AI

The small and medium-sized publishers who traditionally make up the independent publishing sector could arguably benefit more from AI embedded in the publishing workflow than any other part of the industry. They operate with tight profit margins and limited or stretched resources, and their processes often end up being outsourced or handed to freelancers, so AI that makes editorial and production procedures more efficient and cost-effective has an outsized impact. For example, it’s now possible to take an unstructured manuscript as a Word document and run it through an ingestion process which produces tagged and structured XHTML within five to ten minutes. This is a process which often takes publishers days to carry out and eats up several staff members’ time, time which could be better spent on other tasks and which is ultimately money.
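To make the ingestion idea concrete, here is a minimal, hypothetical sketch of what such a conversion step might look like, written in Python using the open-source python-docx library. The style-to-tag mapping and the output skeleton are illustrative assumptions rather than a description of any particular vendor's pipeline; a production system would also need to handle tables, footnotes, images, and character-level formatting.

```python
# Minimal sketch: convert a .docx manuscript into simple, tagged XHTML.
# Assumes python-docx is installed (pip install python-docx).
from docx import Document
from xml.sax.saxutils import escape

def docx_to_xhtml(path: str) -> str:
    """Map Word paragraphs to XHTML: headings to <h1>/<h2>, everything else to <p>."""
    doc = Document(path)
    body = []
    for para in doc.paragraphs:
        text = escape(para.text.strip())
        if not text:
            continue  # skip empty paragraphs
        style = para.style.name or ""
        if style.startswith("Heading 1"):
            body.append(f"<h1>{text}</h1>")
        elif style.startswith("Heading"):
            body.append(f"<h2>{text}</h2>")
        else:
            body.append(f"<p>{text}</p>")
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<html xmlns="http://www.w3.org/1999/xhtml"><body>\n'
        + "\n".join(body)
        + "\n</body></html>"
    )

if __name__ == "__main__":
    print(docx_to_xhtml("manuscript.docx"))  # path is a placeholder
```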

Money well spent

At the moment, most publishers, and especially those in the indie sector, are using human beings to carry out tasks which could be done by machine learning. Often, these tasks are off-shored to pre-press businesses, as I’ve mentioned, which results in a financial burden and significant overhead for many publishers. Automation can handle the conversion from raw Word manuscripts to tagged and structured XHTML, enriching the metadata along the way. Once content is in XHTML, other previously manual steps can be carried out automatically — pushing the content into InDesign layouts, for example, removes up to 80% of the manual intervention involved.

When this is repeated across an entire list of books or journals, considerable cost and time savings can be made. We estimate that embedding AI in the workflow can free up about 40 per cent of employees’ time in production and editorial departments.

So not only can publishers recoup a lot of the money spent on pre-press outsourcing, but they can also start to get the best out of their existing production and editorial staff, who can let the AI do the heavy lifting and the mundane, repetitive work and instead turn their focus to more business-critical, creative, or higher-level tasks.

The indie publishing sector has a lot going for it. These scalable, dynamic businesses have all the potential to become innovators and forerunners in the AI race. The business case for incorporating AI and machine learning into indie publishing workflows is far stronger than the rationale for implementing most other technologies on the market. And it’s simply a matter of time before we see indies using AI in their workflows to become leaner, meaner, more efficient, and more cost-effective organisations.

Everyday Rights Management for Publishers

Rights management for publishers seems to be a hot topic, with people extolling its virtues at conferences and think pieces being released almost every month. But it might be time to pivot the conversation away from exclusively talking about Digital Rights Management and blockchain solutions to include the mundane, day-to-day work of rights management.

To start off, a useful primer from the World Intellectual Property Organization (available here) points out that there are at least four distinct asset types for which rights management is relevant:

  • Titles in the publishing house catalogue for the current year, as well as the backlist

  • Contracts with authors which grant the publisher the right to publish and sell

  • Sub-licensing

  • Reaching new and different readers through digital means like print-on-demand or digital formats.

None of these is simple, of course — books contain copyrights for the text, illustrations, photographs, and so on, each of which can be subject to a different contract. And apart from individual contracts, there are often laws governing intellectual property that need to be complied with, some international (like the Berne Convention for the Protection of Literary and Artistic Works) and others varying by country and type of use (like the EU’s Directive on Copyright in the Digital Single Market).

For reasons like this, it makes sense for publishers to invest in a system that helps maintain records of contracts, instead of relying on the surprisingly common approach of maintaining multiple Excel sheets. The advantage of a purpose-built system is that it can be customized, allowing for publishing-specific functions. For example, individual assets can be tracked, making it easy to follow their usage across different editions. And since permissions are usually granted for a certain number of uses, automatic prompts can ensure you are never in violation of the law.
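As a toy illustration of the kind of automatic prompt described above, here is a short, hypothetical Python sketch. The field names, the warning threshold, and the example asset are invented for the illustration and are not taken from any real rights-management product.

```python
# Toy sketch of a permission-usage check (all names are illustrative).
from dataclasses import dataclass, field

@dataclass
class AssetPermission:
    asset_id: str            # e.g. an illustration or photograph
    licensed_uses: int       # number of uses granted by the rights holder
    recorded_uses: list = field(default_factory=list)

    def record_use(self, edition: str) -> None:
        """Log a use and warn before the licence is exhausted."""
        if len(self.recorded_uses) >= self.licensed_uses:
            raise PermissionError(
                f"{self.asset_id}: licensed uses ({self.licensed_uses}) already exhausted"
            )
        self.recorded_uses.append(edition)
        remaining = self.licensed_uses - len(self.recorded_uses)
        if remaining <= 1:
            print(f"Warning: only {remaining} licensed use(s) left for {self.asset_id}")

# Example: a photograph licensed for three editions.
photo = AssetPermission(asset_id="fig-2.3", licensed_uses=3)
photo.record_use("First edition")
photo.record_use("Second edition")   # triggers the low-remaining warning
```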

In addition, the system can be made to follow rules that ensure compliance with the law, and these can be periodically updated. The people in charge of constantly updating the system based on new legal changes do not have to be the ones actually keying in information, allowing for the efficient division of labour.

All these issues concern only the storing of data on rights, and it’s valuable to stress this component for two reasons. First, a surprising number of publishers — and this cuts across size, region, and genres published — still use dated systems of manual storage which could be updated with very little investment. Second, a lot of newer systems will have to be built on top of this basic layer, which means that unless such a system is in place, talking about more advanced tech is moot.

Of course we do not want to stop at talking about rights storage, and so important topics to discuss will inevitably include options like Digital Rights Management (DRM) and contract management.

In brief: instead of just storing information, DRM refers to access-control technology that sets limits on the use, modification, duplication, and distribution of copyrighted material. Individual assets can be embedded with metadata, making sure the information travels with them even outside the publisher’s system. For books, this can include software restrictions that control access to assets, such as Adobe Digital Editions’ proprietary DRM, Apple’s FairPlay DRM, and Amazon’s Mobipocket. DRM isn’t without its critics: it has been argued that its use by the big six publishing groups helped Amazon monopolize the ebook market. Still, it looks like our best bet to stem the tide of piracy.
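As a small, file-level illustration of DRM in practice, the sketch below checks whether an EPUB contains the standard OCF encryption manifest (META-INF/encryption.xml), which commercial DRM schemes typically add. This is only a rough heuristic under stated assumptions: the same manifest can also appear for harmless font obfuscation, so its presence does not prove a title is DRM-protected.

```python
# Heuristic check for DRM in an EPUB (an EPUB is just a ZIP archive).
import zipfile

def looks_drm_protected(epub_path: str) -> bool:
    """Return True if the EPUB contains META-INF/encryption.xml."""
    with zipfile.ZipFile(epub_path) as epub:
        return "META-INF/encryption.xml" in epub.namelist()

print(looks_drm_protected("title.epub"))  # "title.epub" is a placeholder
```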

As for contracts, as we have written before, smart contracts using blockchain can digitally facilitate, verify, and enforce an agreement between two parties in a transparent and trackable way. This technology is already being developed and implemented by publishing tech, meaning this is less about a theoretical possibility and more about shaping current tech.

Clearly, there are emerging technologies from which we cannot remove our gaze. But as exciting as ideas like DRM and smart contracts are, too often they are thought about in isolation rather than as components in a complex publishing ecosystem. To combat this, we need to contextualize them by thinking about the less shiny aspects of rights management — like the databases where rights managers work on a day-to-day basis. There is an obvious temptation to fixate on the cutting edge of a field, but doing so misses a lot of the everyday work of publishing professionals, work that may be less exciting but is no less essential.

What three studies tell us about automation in the workplace

One of the most popular topics we regularly tackle on this blog is automation, and the impact technology such as AI, Machine Learning and robotics is having, and will have, on the job market and the way we work.

In recent weeks, both in the US and UK, some interesting studies have been carried out on this hot topic by Pew Research Center and the Office for National Statistics (ONS), respectively. Meanwhile a survey entitled “Humans Wanted: Robots Need You” was conducted by recruitment company ManpowerGroup across 44 different countries, looking at the incorporation of bots into the working world and what this will mean for employees globally.

Dangerous and dirty

The Pew study examines the attitude of Americans in the light of increasing workplace automation, pulling together insightful charts and graphs from a range of public polls produced by the Center recently.

It concludes that while most Americans anticipate widespread disruption in the coming decades, few believe automation will affect their own job. Meanwhile, three quarters of Americans view job automation in a negative light, with around half of respondents claiming automation has, to date, done more damage than good.

The general public is broadly supportive of automation replacing “dangerous and dirty” roles and is vehemently in favour (85 per cent) of seeing restrictions put in place to limit automation to only replacing jobs which are deemed too dangerous or unhealthy for humans.

Interestingly, when asked about whether the government or the individual should assume responsibility for helping workers who are displaced by the introduction of robots in the workplace, there was a split down the middle across party political lines.

1.5m jobs on the line

Meanwhile, across the pond, the Office for National Statistics (ONS) study states that 1.5m people in England are at high risk of losing their jobs. Having created a bot to analyse the jobs of 20m workers, the ONS concluded that 7.4 per cent of these are at high risk of being replaced, with women facing the greatest exposure: they occupy around 70 per cent of the at-risk roles.

There are some interesting correlations between these two studies. Both concur on the types of roles facing disruption — hospitality staff, retail assistants and sales workers top the high-risk list, while those working in medical professions and education are widely considered lower risk. Both studies also agree that young people and part-time workers are particularly vulnerable to workforce automation.

Silver linings?

While these two studies paint an overwhelmingly bleak picture, the ManpowerGroup survey is, on the surface, far more optimistic in its outlook. The report’s overarching message is that humans and robots can coexist, and that automation needn’t be something to fear but something which will provide us with a wealth of new opportunities. It claims that 69 per cent of employers are planning to maintain the size of their workforce, while as many as 18 per cent actually want to hire more staff as a result of automation. To launch the study, Chairman and CEO of ManpowerGroup, Jonas Prising, said: “More and more robots are being added to the workforce, but humans are too.”

But if you scratch beneath the surface, the situation isn’t as peachy as they would have you believe. The study states that “just” nine per cent of employers believe automation will lead to job losses. On paper that may not seem like a particularly high percentage, but spread across the global workforce it represents a very large number of jobs indeed.

Spin it how you want: automation will give with one hand and take away with the other; it will optimise some jobs and replace others; it will strike fear into some and leave others in a state of excitable rapture. The world of work is changing around us as we live and breathe, and these studies, however depressing they may be, offer useful insights and a valuable yardstick for the evolving attitudes of employers and workers during very uncertain times.

Last summer we discussed how automation is likely to affect different roles and tasks within the publishing ecosystem over the course of four blog posts. To find out how your job might be affected by the rise of the robots check out our The State of Automation series here: part 1, part 2, part 3, part 4.

Blockchain, Coming to a Computer Near You

Last year, Facebook was front-page news when it came to light that Cambridge Analytica had obtained data on hundreds of millions of Facebook users through third-party apps. This week, Facebook CEO Mark Zuckerberg told ABC News that the company is “still looking into” the claim that personal information for millions of users is easily available on Amazon.com Inc’s cloud servers. While Facebook investigates, what are users supposed to do? That is where blockchain might come into play.

Previously, I have written about blockchain and how it applies to publishers and content creation, but will this technology expand to help govern how users interact with the internet and verify their identity as a whole? This week, while Zuckerberg was calling for Congress to regulate Facebook, PayPal invested in Cambridge Blockchain, a startup working to give individuals a way to own their own identity online. Akin to how blockchain allows bitcoin users to store value without a bank, blockchain may allow users to verify identity without an intermediary like Facebook.

While PayPal surely sees this as something its users can benefit from in online financial transactions, the technology could have wider implications, providing safe interactions online for users of all kinds and changing online communication and collaboration in a remarkable way. When you consider how many different corporate entities own our data — from banks to retailers, social media networks to airlines — we can see just how exposed we all are to data breaches, cyber-attacks, identity theft and fraud, especially as we don’t actually know how robust and secure these companies’ data infrastructures are. As blockchain applications proliferate in the marketplace, we should start to see this balance redressed and consumers taking back control of their data. Though it’s too early to tell exactly how things will play out until the technology is in use, this investment by PayPal should give users some peace of mind that they will be able to protect themselves from identity theft in the future.

Why Preprint repositories are essential to academic work: A Case Study

There is a lot of talk about peer review and how it can be made better, but unfortunately, a lot of this happens at a level of abstraction that makes it easy to miss more modest changes that can go a long way. 

For example, a common practice in certain sciences is pre-publication review, in which manuscripts are uploaded online for open discussion before official peer review and journal acceptance, giving the community at large an opportunity to scrutinize results and methods. The advantage of such a process is that it makes the selection of peers far more transparent; the downside is that it allows anonymity for neither author nor reviewer. That downside might seem decisive, since anonymity is widely accepted as an obvious virtue. But a real-life case study indicates why it might be worth the price.

A real-life example of how a larger pool of peers might be more effective than two anonymous reviewers can be found in a recent incident surrounding an arXiv submission. arXiv.org is a repository for preprints of papers in the sciences and mathematics. In 2018, two researchers from the prestigious Indian Institute of Science, Dev Kumar Thapa and Anshu Pandey, posted a paper on arXiv in which they claimed to have discovered an instance of superconductivity at room temperature in “a nanostructured material that is composed of silver particles embedded into a gold matrix”. If true, this would have been a game-changer for materials science and, really, all of society, since electricity could in principle be transmitted without any loss.

This pre-print caught the eye of a postdoc at MIT, Brian Skinner, who probed into the data a little more and found some odd features in the reported measurements.

Skinner wrote up his observations and posted them on arXiv himself. The story was quickly picked up by various outlets, including Nature, Scientific American, and Wired. The authors, for their part, seem to have dug in their heels and have not admitted to any wrongdoing.

Most relevant for the broader point about opening up peer review is that Skinner is not an expert in the field of superconductivity, so he probably wouldn’t have been a potential reviewer for the paper in question at all. And his decision to “zoom in closely” on the data isn’t a standard method for vetting papers, so if the pre-print hadn’t been posted somewhere relatively public, this discrepancy would likely have gone unnoticed and the paper would have been published. The best-case scenario then would have been a retraction.

Of course, there is the lingering question of whether such a model could be extended outside certain sciences. For example, it has been pointed out that medical journals might resist it because making results public prematurely might impede the ability to get proper press attention after full publication. And there are questions about whether the lack of anonymity at the preprint stage would effectively do away with anonymous review altogether, since the authors will already be known from the pre-print. So this is far from a knockdown argument. But I suspect one reason pre-prints aren’t more popular is simply that many people outside the sciences haven’t heard of them, and that at least can be addressed easily enough.

Trending now — AI ethics

In a significant move this week, Google announced the formation of an external global advisory council designed to offer “guidance on ethical issues relating to artificial intelligence, automation and related technologies”.

The Advanced Technology External Advisory Council (ATEAC) will consist of eight leading academics and policy experts from around the world, including former US deputy secretary of state William Joseph Burns, University of Bath computer science professor Joanna Bryson, and mathematician Bubacarr Bah. The council will meet for the first time in April and on a further three occasions throughout the year.

This move doesn’t necessarily represent a sea change in the tech giant’s policy and attitude towards AI ethics: the company had already established internal councils, panels and review teams to confront the challenges posed by AI and related technologies. Last June it published its seven guiding AI principles, outlining its approach to the adoption of AI. However, it is notably the first time Google has sought worldwide expertise on AI to inform its overall strategy, and it will be interesting to see how this development shapes the company’s future business decisions, which have often come under a great deal of criticism.

Google is not the only tech powerhouse looking at ethics and how it goes about adopting, investing in and incorporating AI innovations. Perhaps coincidentally, just a day before Google launched the external advisory council at the MIT Technology Review's EmTech Digital conference, Amazon revealed a collaboration with the National Science Foundation and a $10m cash injection to help develop systems based on fairness in AI. Meanwhile, over at Microsoft, Harry Shum, executive VP of its AI and Research Group, announced at the very same conference that the company will be adding “an ethics review focusing on AI issues to its standard checklist of audits that precede the release of new products”.

The discourse around AI, particularly coming from the heavy hitters in Silicon Valley, has certainly changed; that much is clear. Whether this is down to pressure on these firms to adopt a less gung-ho and more measured approach as they slog it out on the AI innovation battlefield remains to be seen.

But is it realistic to expect the likes of Google to genuinely care about AI ethics, to the point that they are prepared to start prioritising these issues above their own sizeable business interests? This week the general mood at the summit in San Francisco was sceptical. Rashida Richardson, director of policy research at the AI Now Institute, was quoted in Reuters as saying: “Considering the amount of resources and the level of acceleration that's going into commercial products, I don't think the same level of investment is going into making sure their products are also safe and not discriminatory.”

While AI ethics may now be at the forefront of the agenda at conferences such as EmTech Digital, companies are still not held accountable by the regulation and legislation necessary to keep them in check and ensure that their roll-outs are responsible and ethical. In the absence of a single, global regulatory body operating in the field of AI, large tech firms are pretty much left to their own devices, free to self-regulate and develop AI-driven products and services without directives or consequences. It’s a dangerous situation, and one which has already led to several high-profile, real-world incidents in which AI-based innovations were rushed through and members of the general public paid the price.

If we want the tech giants to offer more than lip service and tokenistic gestures on ethics in AI, maybe now is the time the industry should consider introducing independent regulation to enforce ethics rather than just talk about them.

BISG and PageMajik Survey Shows Publishing Workflow in Need of Rethinking

This piece was originally published in the Publishers Weekly Book Brunch London Book Fair Show Daily

When the digital revolution began over a decade ago, publishers were forced to examine their decades-old way of doing business. The move to digital forced publishers to look for dramatic ways to improve efficiency and keep up with a market they struggled to recognize. Unfortunately, the processes that followed were often a digital version of an existing system, barely improving productivity and, in some cases, creating additional unnecessary work.

To learn more about pain points in the publishing workflow, PageMajik and Book Industry Study Group (BISG) last fall partnered on a survey of publishing professionals. The goal: identify issues and offer workflow solutions that would help both the industry and individual publishers.

The survey revealed that 17% of respondents spend 25–50% of their time doing repetitive tasks, while 47% of respondents said repetitive tasks take up 10–25% of their time. 58% of respondents felt that at least some of those repetitive tasks were avoidable, and over half also said they could be more effective in their jobs if repetitive work were eliminated.

Among the largest time-wasting activities, according to respondents, were updating metadata, providing the same information in multiple reports, tracking projects in various formats, and outlining assignments.

A system that automates some of these processes would provide publishers with both efficiency and time. In turn, those publishers could focus on higher-level product development and related strategic work, such as acquisition, design, and promotion.

The conversation about workflow best practices doesn’t end with the survey or this article. On March 28th in New York, the Book Industry Study Group (BISG) will host a meeting focused on cloud-based workflows. Structured as an interactive, two-hour workshop, the program will solicit even more information about the challenges publishers and the book industry face.

PageMajik is also continuing to explore these challenges and share its views on how to address them. For more information about the survey or to discuss your particular workflow challenges and how we might help, please visit me at the PageMajik booth at Stand #3B08.

Jon White is the Global Vice President of Sales & Marketing at PageMajik.

Marshall Cavendish Education launches pilot with PageMajik

Leading Singapore-based education publisher Marshall Cavendish Education will be piloting PageMajik’s publishing workflow-based Content Management System. The rollout will happen in stages upon the successful completion of the pilot.

Marshall Cavendish Education produces more than 400 curriculum-based titles each year, and, working with PageMajik, the publisher’s authors, editors and designers will be able to work together on one intuitive platform to improve collaboration, streamline workflows, and assist in meeting deadlines.

Richard Soh, Manager of Publishing Systems and Administration at Marshall Cavendish Education commented: “We are very excited about working with PageMajik. We anticipate that the product will dramatically improve the way we produce and publish content across the organisation, bringing more speed and efficiency into our publishing processes.”

Ashok Giri, CEO at PageMajik stated: “Marshall Cavendish Education has a magnificent history and heritage in education publishing and we are delighted to be working with the company to implement our product across their business. We are really looking forward to this collaboration and are confident that the PageMajik system will bring about positive change to the way Marshall Cavendish Education develop and produce content.”

 

About Marshall Cavendish Education

A subsidiary of Times Publishing Limited, Marshall Cavendish Education is the leading provider of distinctive K–12 educational solutions in Singapore, supplying Singapore schools with innovative, high-quality content and solutions.

For 60 years, Marshall Cavendish Education has constantly developed solutions to ensure educational excellence and has earned the approval of the Ministry of Education, Singapore.

Headquartered in Singapore, Marshall Cavendish Education has offices in Hong Kong, China, Thailand, Chile and the United States. The brand is also recognised worldwide for its work in ensuring excellent educational standards and for continuously raising the quality of learning around the world, inspiring students and educators to learn and teach more effectively.

For more information, please visit www.mceducation.com.

 

About PageMajik 

We are a 40-member team comprising experienced industry professionals and tech wizards with relevant domain experience on both the publishing and the software-development sides. Our core team has worked with the publishing industry for a combined 10 decades and has been able to use that experience to develop a truly revolutionary product. We listen to the needs of our customers and incorporate forward-facing ideas into the development of our solution. Our product is ever-changing, as we are constantly trying to improve the experience for our users.

For more information, please visit www.pagemajik.com.
