The Shinano Mainichi Shimbun, a regional Japanese newspaper, is incorporating artificial intelligence into the newsroom and using it to write summaries. Read more about the new technology.
Would you care if a story you read in a newspaper or online was “written” by a machine rather than a stressed-out hack? Would you even be able to tell the difference? Welcome to the world of “robo journalism” – and it’s coming faster than you think.
Some 99 drafts of scientific papers have been generated so far by a manuscript writer launched three weeks ago, according to the electronic lab notebook company sciNote.
What’s news? Well, it’s not what it used to be, and probably not what it’s going to become.
Over the past decade, the landscape of news and information has shifted dramatically. Journalists now routinely mine huge data sets to uncover hard-to-find stories. Automated systems fire off news headlines at sub-second speeds. Rich multimedia presentations are the rule, not the exception. Platforms like Facebook and Twitter dominate the distribution of news. And new startups are testing new business models every month.
At Reuters, we track all of these changes closely – not only because we’re one of the world’s largest news organizations, but also because we supply news and information to thousands of other newsrooms. As Executive Editor for Editorial Operations, Data and Innovation, the pace and shape of change is something I observe with special focus. What’s clear is that all this is likely only the beginning of even more seismic change in the industry.
For all the upheaval in news, some core parts of the business remain nearly untouched. People still create most of the content. Stories are still created for mass audiences. Technology is more a tool than a partner. All of that will likely change.
We’re already rapidly adopting new tools to help us find news faster. We use Reuters Tracer, a technology developed by the Thomson Reuters Research and Development team that algorithmically detects newsworthy events breaking on Twitter and rates the likelihood that they’re true so that reporters can get a head start on confirming the news.
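Tracer's actual models are not public, but the core idea – cluster tweets that seem to describe the same event, then score each cluster for newsworthiness and credibility – can be sketched with a toy heuristic. Everything below (the keyword clustering, the thresholds, the weights) is an illustrative assumption, not Reuters' method:

```python
from collections import defaultdict

def score_cluster(tweets):
    """Score a cluster of tweets reporting the same event.

    Toy credibility signals: the number of distinct authors and the
    share of verified accounts. Returns a value in [0, 1].
    """
    authors = {t["author"] for t in tweets}
    verified = sum(1 for t in tweets if t["verified"])
    source_score = min(len(authors) / 10, 1.0)   # more independent sources -> higher
    verified_share = verified / len(tweets)
    return 0.6 * source_score + 0.4 * verified_share

def detect_events(tweets, min_mentions=3):
    """Group tweets by shared keywords and score clusters big enough to be news."""
    clusters = defaultdict(list)
    for t in tweets:
        for word in t["text"].lower().split():
            if len(word) > 4:                    # crude keyword filter
                clusters[word].append(t)
    return {k: score_cluster(v)
            for k, v in clusters.items() if len(v) >= min_mentions}
```

A real system would use language models for event detection and far richer verification signals; the point here is only the shape of the pipeline – detect, then rate likelihood of truth, then hand off to a reporter.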
But it goes far beyond just using technology to help us do what we do more effectively. It’s using technology to rethink what we do and how we do it.
For example, language generation systems – technologies that can understand documents, analyze data and create text – have so far been largely focused on creating short stories at lightning speed, or on turning out vast numbers of relatively simple, routine stories. But machines are capable of much more, especially trawling through huge amounts of data and surfacing interesting patterns and outliers.
So far, we haven’t trusted those systems to turn their findings into stories, and that’s probably the right call. Knowing whether some trend or change is significant is something that humans (still) do better than machines. So, why not marry the two capabilities – machine analysis and human judgment – into a single system and take advantage of the strengths of both parties?
That’s an approach we’re investigating at Reuters – using technology to crawl through multiple proprietary databases, unearth insights, turn them into sentences and paragraphs, and then offer them to journalists, who can use them as tips, integrate them into stories, or simply discard them. Call it the cybernetic newsroom.
And as the system harnesses feedback from how journalists use the insights, it could steadily improve, presenting fewer but more insightful pieces of information, or even becoming so deeply embedded in the newsroom workflow that it flags leads reporters could follow up on.
This capability also means we can think about delivering not just more insightful news, but also more personalized news.
Imagine if the market report didn’t come to readers just when the market closes or when journalists write it, but was instead created by a machine whenever a reader wanted it. And that the report didn’t simply recite how the market performed, but how that reader’s specific portfolio performed compared to the broader market. Or even better, if the report could analyze the key reasons for why the reader’s portfolio performed as it did, and present that information as well.
For example: “It’s 3:35 pm. The market is currently up 1 percent, but your portfolio is down 2 percent. A key reason for your portfolio’s performance is the purchase of XX stock last week, which has fallen sharply since….”
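A report like that is within reach of straightforward template-based text generation over live data. The sketch below is purely illustrative – the field names, the weighting arithmetic and the wording are assumptions, not any newsroom's or vendor's actual product:

```python
from datetime import datetime

def portfolio_report(now, market_pct, holdings):
    """Generate a one-paragraph personalised market summary on demand.

    holdings: list of dicts with keys 'ticker', 'weight', 'pct_change',
    assumed to come from upstream market-data feeds.
    """
    # Portfolio return is the weighted sum of each holding's change.
    port_pct = sum(h["weight"] * h["pct_change"] for h in holdings)
    # The key driver is the holding with the largest absolute
    # weighted contribution to that return.
    driver = max(holdings, key=lambda h: abs(h["weight"] * h["pct_change"]))
    direction = "up" if port_pct >= 0 else "down"
    mkt_dir = "up" if market_pct >= 0 else "down"
    return (
        f"It's {now:%H:%M}. The market is currently {mkt_dir} "
        f"{abs(market_pct):.1f} percent, but your portfolio is {direction} "
        f"{abs(port_pct):.1f} percent. A key driver is {driver['ticker']}, "
        f"which moved {driver['pct_change']:+.1f} percent."
    )
```

Because the text is computed rather than written, the same function can run whenever the reader asks, against that reader's own holdings – which is exactly the shift from scheduled mass-audience reports to on-demand personalized ones.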
How close are we to being able to do this, and much more? The technologies are very nearly here already, and newsrooms are starting to embrace the possibilities. But there’s probably even more that we haven’t imagined yet; we’re just at the start of a brave new era of news and information.
For up-to-date news and information, visit Reuters.
The Press Association has been awarded €706,000 by Google to develop a robot reporting project which will see computers write 30,000 stories a month for local media.
Artificial Intelligence (AI) in newsrooms has a lot of potential for smarter journalism. Yet, as newsrooms increasingly experiment with new technologies, such as machine learning and natural language processing, they also run into practical and ethical challenges. Exploring some of these issues was the motivation behind a recent conference at Columbia University in New York.
When AI fails
Success is built on failed experiments and these are certainly part of the current AI experience. Marc Lavallee, head of the Research and Development team at the New York Times, recalled one recent AI experiment that did not go according to plan.
Speaking on the panel “AI in the Newsroom: Technology and Practical Applications,” Lavallee described how his team trained a computer vision programme to recognise members of Congress at the inauguration of President Donald Trump. “For some reason,” Lavallee said, “[the programme] thought all the old white dudes in the audience looked like (U.S. Senator) Al Franken.” In light of such experiences, he added, “We’re approaching this with a healthy dose of scepticism.”
Can the reality of AI live up to the hype?
Other panellists lamented that, given the current hype around AI-powered technology, actual applications can’t keep up with expectations. Sasha Koren, editorial leader of the Guardian’s Mobile Innovation Lab, noted that she found chatbots an “underwhelming experience.” Despite all their promises “that they will chat with you as if they are human,” she said, all they are “really doing is querying a database.”
As AI in the newsroom gains more attention, so does the influence of commercial companies trying to sell tailored products to newsrooms. Meredith Whittaker, who leads the Google Open Source Research group and is a co-founder of AINow, detected a tendency to “naturalize the technology,” so as to make it seem inevitable, when in fact it’s always designed by people. The actual capabilities of these programmes may not be always clear, especially as some developers are unfamiliar with the particular characteristics and standards of journalism.
What’s missing from this conversation, Whittaker said, was the question of whether, and to what extent, claims by commercial companies live up to their promises. That’s of concern because these AI developers are salespeople “who don’t give us access to the algorithm, who legally and for a number of good reasons can’t give us access to the data, who assume that our input data matches whatever the data they used to train these algorithms and who are making claims about the efficacy in a field they may or may not understand…”
Artificial Intelligence and ethics
The ethical questions around AI took centre stage at the panel “Exploring the Ethics of AI Powered Products.” Some of the panellists touched on the ethical challenges at the core of AI applications—developing abstract measurements for real life problems. “We have a lot of things that we’d like to measure,” said Jerry Talton of Slack.
Talton mentioned the example of Slack trying to build predictive models that help important pieces rise to the top of online conversations between co-workers. But, he added, as predictive models can only offer correlations, the ethical challenge lies in “figuring out that gap between the things that we can actually predict and what we’re using those things as proxies for.” Implicit is the danger that predictive models give a false security of what piece of information is important.
This sentiment was echoed by Angela Bassa, of iRobot. “Math doesn’t care,” she said, indicating that mathematical models are not biased in any particular way. What makes a difference, however, is how data is being gathered. Bassa pointed out the false allure of clean data. “We’d like to imagine that it gets collected in these hermetically sealed, beautiful ways where you have these researchers in hazmat suits going into the field and collecting. That’s not how it works.”
The limitations of AI
Recognising limitations of AI was a general theme in this panel discussion. Madeleine Elish, a researcher at Columbia University and Data&Society, emphasised that just because AI technology is automating certain tasks, it should not be considered fully autonomous.
“It’s important to realise that right now deployed AI … is automating a task but in a very particularly prescribed domain.” This becomes an ethical question, she added, “when we start to assign too much power to the idea of a software program we forget all the kinds of agencies humans have over the different aspects that go into building these systems.”
Do you have any examples of artificial intelligence in the newsroom? Please share them with the EJO via comments or our Facebook page. https://www.facebook.com/en.EJO.ch/
This article is the second in the EJO series on artificial intelligence in the newsroom. You may also be interested in reading: Smarter Journalism, Artificial Intelligence in the Newsroom
The post Smarter Journalism: The Dark Side of Artificial Intelligence in the Newsroom appeared first on European Journalism Observatory – EJO.
Journalism is becoming increasingly automated. From the Associated Press using machine learning to write stories to The New York Times’ plans to automate its comment moderation, outlets continue to use artificial intelligence to try to streamline their processes or make them more efficient.
But what are the ethical considerations of AI? How can journalists legally acquire the data they need? What types of data should news orgs be storing? How transparent do outlets need to be about the algorithms they use?
These were some of the questions posed Tuesday at a panel discussion on the ethics of AI-powered journalism products, held by the Tow Center for Digital Journalism and the Brown Institute for Media Innovation at Columbia University.
Tools such as machine learning or natural language processing require vast amounts of data to learn to behave like a human, and NYU law professor Amanda Levendowski listed a series of considerations to weigh when trying to access data for these tasks.
“What does it mean for a journalist to obtain data both legally and ethically? Just because data is publicly available does not necessarily mean that it’s legally available, and it certainly doesn’t mean that it’s necessarily ethically available,” she said. “There’s a lot of different questions about what public means — especially online. Does it make a difference if you show it to a large group of people or a small group of people? What does it mean when you feel comfortable disclosing personal information on a dating website versus your public Twitter account versus a LinkedIn profile? Or if you choose to make all of those private, what does it mean to disclose that information?”
For example, Levendowski highlighted the fact that many machine learning algorithms were trained on a cache of 1.6 million emails from Enron that were released by the federal government in the early 2000s. Companies are risk averse, she said, and they prefer to use publicly available data sets, such as the Enron emails or Wikipedia, but those datasets can produce biases.
“But when you think about how people use language using a dataset by oil and gas guys in Houston who were convicted of fraud, there are a lot of biases that are going to be baked into that data set that are being handed down and not just imitated by machines, but sometimes amplified because of the scale, or perpetuated, and so much so that now, even though so many machine learning algorithms have been trained or touched by this data set, there are entire research papers dedicated to exploring the gender-race power biases that are baked into this data set.”
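The kind of bias Levendowski describes can be made concrete with a crude co-occurrence count – a toy stand-in for what happens, at far larger scale, inside language models trained on a skewed corpus. The corpus and word groups below are invented for illustration:

```python
def cooccurrence_bias(corpus, target, group_a, group_b):
    """Count how often `target` shares a sentence with words from two groups.

    A large gap between the two counts hints at an association baked
    into the corpus, which downstream models will inherit and can amplify.
    """
    count_a = count_b = 0
    for sentence in corpus:
        words = set(sentence.lower().split())
        if target in words:
            count_a += len(words & set(group_a))
            count_b += len(words & set(group_b))
    return count_a, count_b
```

Real bias audits work on embeddings rather than raw counts, but the mechanism is the same: if a word like "trader" co-occurs overwhelmingly with one group of pronouns in the training data, models trained on that data encode the skew.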
The panel featured speakers such as John Keefe, the head of Quartz’s bot studio; BuzzFeed data scientist Gilad Lotan; iRobot director of data science Angela Bassa; Slack’s Jerry Talton; Columbia’s Madeleine Clare Elish; and (soon-to-be Northwestern professor) Nick Diakopoulos. The full video of the panel (and the rest of the day’s program) is available here and is embedded above; the panel starts about eight minutes in.
Christoph Thun-Hohenstein, Director of MAK, the Austrian Museum of Applied/Contemporary Art in Vienna, discusses the expectations and dangers surrounding this newfound modernity and introduces the ‘Hello, Robot’ exhibition that GEN Summit attendees will get to enjoy at the first social event on Wednesday 21 June, at the close of the first day of the conference.
Christoph Thun-Hohenstein: The big picture is that the world is currently divided between two movements. One is Dataism, a movement based on the belief that the ideal state of the world is one governed not by humans but by data – Big Data. On this view, data is constantly parsed, updated, analysed and algorithmically optimised, and humans no longer play a pivotal role in it. The human component of this data – in the broader sense of the term – has become redundant.
The other movement is the so-called “Techno-humanism”, which is about the upgrading of humans: intellectually, mentally and physically.
In both directions, and behind those movements, lies quite a lot of power and money. These movements could be useful in some of their teachings, but they are not anything to aspire to as a whole. We should instead focus more on the kind of future we want as humans, as biological beings with consciousness, on the evolution we want to achieve. Obviously journalism plays a big part in all of this, as a means to carry this debate, expose the information and give the general public the needed perspective on the times we are living in and where the technology we are embracing could be leading us.
As self-learning digital machines are getting better at mimicking the human touch, our minds are getting accordingly adept at understanding and wholeheartedly accepting algorithms and digital systems taking over some parts of our lives.
We can imagine that more and more of our other human capabilities will be discarded, will simply go unused, if our mindsets are geared to work as digital partners with machines. How can we initiate a pro-human approach? How can we set a human agenda that makes us aware of what is happening, enabling us to use the parts of digital innovation that are really helpful and renounce the ones driving us in a direction we would not want to follow if we knew the repercussions they would have for us in the future? In a nutshell, this is what the ‘Hello, Robot’ exhibition is all about.
Obviously a lot is changing in the media right now; it is hard enough to know what it will look like as an industry in five years’ time. In my mind, we have barely started a discussion, in the industry and with audiences, on where this new modernity will lead us. And it is important, as more automation, devices and algorithms come into our lives, with the incremental innovation we get almost every day through upgrades. What options do we have? What influences are at play here? What about the money behind the digital monopolies being cemented as we speak, and where is this money leading?
There is quite a lot of research being conducted on developing a ‘superintelligence’: an artificial intelligence able to outsmart humans because it would be far superior to human intellectual capabilities, wherein lie countless possibilities. This is a point that will become more prominent in the next 30 to 40 years. Such a superintelligence is not just possible; it is actually plausible.
So, what is needed is for the media to deal with the main aspects of digital modernity – with the possible outcomes of current digital developments and the implications these technologies will have on people’s lives, not only in 18 months’ time but also in a few decades. The general public needs to understand the impact of robotics, artificial intelligence and biotechnology. This, I think, is the first big imperative we need to tackle.
As digital programs get smarter and more prevalent thanks to their self-learning capabilities, the media will make an impact by making the processes and stakes obvious to the audience. The question then will be whether we are able to discuss and collectively decide on a future we want to live in and plan for. There will hopefully be scope for celebrating human qualities that cannot be calculated and catered for by digital programs, by algorithms. We should also want part of our lives to resonate.
We live in a very fast world and have more or less come to terms with this; we all organise our lives with and by the terms of our smartphones. But I think we also need to reserve moments in our daily lives for different content, different offerings.
We need to talk about ethics and technology, and also about shaping the future and what the options are. I think one task of the media is to dig, to expand on this topic and its potential repercussions – and I have to say that, compared to 10 years ago, there has been tremendous progress in the last year, as more publications are covering ethics in digital innovation now. But I doubt that this reflection goes much broader or deeper. Journalists have to become much more fundamental about this, to grasp the issue in its entirety, so that the general public can understand what is at stake.
A big part of what I believe and hope for is that the media has a major role to play in dealing with the most important dimensions of our new digital modernity. Currently, a lot of attention is turned towards brand-new slices of innovation, disruptive or otherwise, but you rarely come across articles that put the whole phenomenon of disruption into perspective.
What do you think is the biggest pitfall here? If we don’t have this discussion about machines, are we looking at what many scientists have pointed out – that AI might endanger the human species at some point? What are the dangers of this phenomenon?
Some scientists have been talking about this: a big wave of automation, bigger than what exists now. The system works in such a way that as soon as one company starts down this path of partial-to-full automation, when technology permits it, all the other companies in the same field have to do the same. This is the law of the market. When this happens, it will be big; there will be waves of innovation. Look at what happened to the service industry – banks or insurance companies: a large chunk of the work is being done away with as they become fully automated.
As long as we still have options, we need to be able to understand these issues and what they entail, so we can make the conscious choice to go all the way or into a completely different direction altogether.
Researchers working on artificial intelligence want to achieve progress and break new ground. They will not stop until they see the promise of a “superintelligence” realised, in the sense of artificial intelligence much smarter than humans. There is nothing that will stop them from doing it.
We do not have a discussion broad enough about these issues currently but it is high time to do so. It has started, especially in the past year as I said, but it has to be much more systematic, this conversation has to have a much stronger presence everywhere.
This is not to be hypocritical – algorithms are becoming important in journalism, and they will develop more and more – but to be precise and determined. A line needs to be drawn, and it needs to be clear that there is still an important scope for “human journalism,” because even the most advanced self-learning machines cannot achieve what it can.
True storytelling is always going to be taken care of by human hands and a human brain, in one way or another: the voice of a piece is too important, so is the emotional approach, which might condition the reach and audience engagement.
Machine learning really means that information can be processed and replicated at the same level of quality. And while machines can mimic with similar apparent value, they cannot yet be creative and come up with something brand new, a new narrative. Here lies the inherent, and hopefully enduring, strength of journalism and storytelling. Now, to protect this, we need to start a discussion.
Christoph Thun-Hohenstein assumed direction of the MAK — Austrian Museum of Applied Arts / Contemporary Art on 1 September 2011. He was director of the Austrian Cultural Forum New York from 1999 to 2007, after which he served as managing director of departure — the Creative Agency of the City of Vienna, until August 2011. Christoph Thun-Hohenstein has published on topics dealing above all with European integration and with contemporary culture and art, and has held numerous lectures on these topics. He has also curated exhibitions of contemporary art, and he regularly serves on selection juries.
From tech to ethics: Will AI be a threat to journalism? was originally published in Global Editors Network on Medium, where people are continuing the conversation by highlighting and responding to this story.
Mark Duffy has written the Copyranter blog for 11 years and is a freelancing copywriter with 25-plus years of experience. His hockey wrist shot is better than yours.
The perfect machine-learning AI copywriter is coming. Very soon.
Analog copywriters, you think I’m kidding? Then you should probably stop reading right here.
Goldman Sachs is already throwing sacks of money behind one of them — Persado. (Terrible name, humans. You should ask it to come up with 1,000 better names for itself. Or even better, hire another robot copywriter to do it.)
Here’s copy from its website: “What if there were a way to inspire action every time?” (What if I started a tree farm in Vermont?) More site copy: “Emotional and rational triggers quantified. … (Don’t mention “triggers” right now, please.) “Effective communication systemized. … Hard science behind soft skills.” (SOFT?)
You can bet other tech gurus (Apple, for sure) are working on better versions of Persado. And when you consider the abysmal state of advertising copywriting, robot copywriters can’t get up and online fast enough. Data is being optimized, the Customer Experience is being optimized, and it’s way past time for copywriting to be fully optimized.
All of you writers crying about your “craft” need to step off and face reality. Copywriters aren’t writers. We’re quipsters, pun-anators, phrase-ers, slicksters. Writers get paid to write; copywriters get paid to sell. Period. And no, “content” writers, you’re not real writers, either.
Some copywriters will still have job opportunities in the near-future Artificial Creative Departments. The politically and morally supple of you will survive. You’ll just have new titles, like Robot Copywriter Monitor, Robo-Curator or Account Executive.
The robots will have some weaknesses. They won’t wear “cool” frames or “cool” kicks. And they won’t hover over your shoulders, art directors, giving you that last amazing little layout change that really “brings everything together.”
Their strengths, though …
– You need two weeks to create a great campaign for a pitch? Robots will mold and hone 10 different campaigns tied to 10 different strategies while you’re taking a work dump. You wrote two versions of body copy for a launch ad? Robots will write a hundred versions of the same copy, and the only reason it would stop there is because it was told to.
– You wrote a Gold Lion-winning TV spot? The robot will feed from your ad and 100 other ads for that brand and spit out a Titanium Lion-winning follow-up spot, and it won’t then demand to be sent to Cannes fully comped every year thereafter like a complete douchebag.
– No groaner pun headlines (unless you ask for them).
– Robot copywriters won’t suck at grammar like you (and me).
– A robot will search the entire internet to find the exact right cause/issue/purpose to seamlessly attach to a brand, making the brand look damn near altruistic.
– “Immersive Storytelling” will sound so much better via the AI CW’s soothing voice.
– You can name your robot copywriter whatever you want: Bernbach, McCabe, Draper, Steinbeck, copyranter, Steadman (look him up, kids).
– Robots will pull all-nighters every night and not smell like ass in the morning.
The technological race for human (consumer) attention won’t be won by human copywriters. This “People Based Marketing” everybody’s crowing about will very soon be executed best by nonhuman copywriters. Every day, brands and marketers are demanding more and more speed and more and more ideas from their “creative” agencies. And creative agencies are gape-mouthed clueless on how to meet these demands. There is much money to be made closing this gap. And you can bet ethicless Silicon Valley will close it. Quick.
But you got a half-finished novel in your “personal” folder, don’t ya, flesh copywriter?
The post Code eats copy for breakfast: Human copywriters are doomed appeared first on Digiday.
Artificial intelligence and automation have recently crossed over into mainstream territory, carving an ever-growing space for themselves in newsrooms. They allow journalists to produce articles with highly shareable content and put the newsroom and its output at the forefront of social media platforms, in the quest to answer one question: how do you bridge the gap between the print audience, the website audience and the highly coveted eyes of Gen Z and millennials on social media? We talked to Zohar Dayan, co-founder of Wibbitz, to try to understand how AI and automation are helping publishers.
Wibbitz: We’ve seen in the US that in the past year or so there has been a strong shift in the industry: much more openness to innovative technologies in the newsroom, especially AI. It was something of a taboo before among news companies, which are naturally very traditional; editors especially were very hesitant when it came to artificial intelligence creating content. We have been seeing a huge shift, an evolution in the market, with much more acceptance of AI as a recommendation tool, AI that writes articles from data, infographics generated automatically from data feeds and, in our case, automatically produced video content.
All of this is a testament to newsrooms and media companies striving to be more efficient from a business point of view, and to keep up with high user demand for content and the diversified platforms that newsrooms now cater for.
We’ve been operating in several European countries: in addition to France, we work in Germany, Spain and Italy. We are opening an office in Paris because we really sensed that the market in France is becoming very innovative, fast. We have managed to start working with very significant companies that you would not necessarily expect to be so innovative – for instance TF1, by all means a traditional TV broadcaster, which has recently been focusing on the digital space, evolving into the next stage and shaping its future around digital content. It’s interesting: we have been seeing a lot of interest and traction from big companies – in addition to TF1, Le Parisien and Le Figaro. All were very interested in the technology and implemented it very quickly. It is very similar to the trend we saw rise in the US about a year and a half ago.
When we first started we were very focused on the automation aspect – we still are – but there were far fewer editing tools and much less flexibility within the creation process. So over time we worked very closely with all of our partners, which gave us plenty of opportunity to get feedback from them and to show them ideas we were thinking of adding to our roadmap, as a sort of discovery process, whenever we were making changes to the product. Through these exchanges we started to see how a publishing company perceived automation: the perception now is very different from two years ago, which is amazing to us.
We saw that publishers could be more open-minded about tools like ours, and we have been able to build our product in a way that works well with what reporters are looking to get out of an automation tool: we see it as a combination of human and machine.
“We have been seeing a huge shift, an evolution in the market with much more acceptance towards AI.”
Two years ago, when we first introduced this idea to a publisher like Forbes, it wasn’t that they didn’t like the tool – it was that they weren’t really open to giving it a chance, from what we saw. They entertained the thought that it was perhaps useful, without fully believing in or understanding it themselves, and didn’t really try to integrate it into their workflow. We were met with strong resistance. There was often an element of fear when talking about automation and AI, especially around content creation. It has happened in many industries, though, whether with Uber and self-driving cars or with a product like ours.
Now, automation is better understood and far less threatening, seen as a process running through a tool that is in no way a replacement for a person. There is a lot more acceptance now, which has enabled us to work with most of the publishing companies in the US and many in France, and we are expanding in Europe and Australia.
Getting over the initial hump of fear and resistance was crucial, and as soon as we got the opportunity to show how the tool can be beneficial without reducing the quality of what newsrooms and publishers are doing, publishers got on board more easily.
Power steering is an apt comparison: automation allows newsrooms to do their job faster. It has been a big change, and perceptions have shifted over the last couple of years – how publishers and journalists now view automation and AI, how receptive they are to it, how they see it as useful to them.
We see more opportunities in the future for more complete automation, having the entire creation process automated, it still won’t be a replacement for people and human interaction, but as the demand for videos shows no sign of slowing down, incorporating automation as a tool means to ease and speed up processes.
One of the major uses we did not account for when we rolled out our product was video on social media – not anything new in itself, but we saw how a publisher could use our platform to adjust videos for every platform: automatically producing a square video or a vertical video. We have a set template to automatically create a 10-second vertical video that looks very similar to what you see on Snapchat Discover or Instagram Stories. This was a direct response to what we were seeing in the market: publishers having a difficult time creating videos for all these different platforms in the right format for each one, which is extremely time-consuming. New AI and automation products help publishers do just that.
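Reformatting a video for each platform starts with simple geometry: given a source frame and a target aspect ratio, compute a centre crop. The helper below is an illustrative sketch of that first step, not Wibbitz's implementation; a production pipeline would also reframe around the subject rather than always cropping from the centre:

```python
def crop_box(src_w, src_h, target_ratio):
    """Return (x, y, w, h) of a centre crop converting a frame to target_ratio.

    target_ratio is width/height: 1.0 for square, 9/16 for vertical video.
    """
    src_ratio = src_w / src_h
    if src_ratio > target_ratio:
        # Source is wider than the target: keep full height, trim the sides.
        w = round(src_h * target_ratio)
        h = src_h
    else:
        # Source is taller/narrower: keep full width, trim top and bottom.
        w = src_w
        h = round(src_w / target_ratio)
    x = (src_w - w) // 2
    y = (src_h - h) // 2
    return x, y, w, h
```

For a standard 1920x1080 frame, a square crop keeps the full height and trims 420 pixels from each side; the resulting box can be handed to any video toolchain that accepts crop coordinates.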
Definitely. We worked with TMZ and a few other publishers when we first rolled out the ‘Snippet’ vertical video. They started using it for Instagram, right when Instagram Stories started. It is something we suggested to them, as they weren’t creating content for it before.
This is a phenomenon we see much more among mid-size publishers as they adopt automation: they might not have thought to create content for Instagram and Snapchat, and it has become a good opportunity for them to start publishing on these platforms and reach new audiences. An increasing number of publications follow this route as they complete their transformation to digital.
It is sometimes difficult, but less so now simply because of the market: the need for video, and its rising importance for social media and general reporting, is everywhere. Publishers now fully understand that they need their content to be as flexible as possible and published across an array of platforms, their own website and their social channels alike, to engage their audience and convert new ones.
A lot of publications do not have an in-house video team that understands how to create these very short, ‘snackable’ videos. They might produce long-form documentaries, or be broadcasting companies, but they do not necessarily understand the kinds of video content you can create through automation.
There is less and less work for us to do in educating publishers on automation and AI, because these topics are increasingly mainstream, widely discussed, and adopted.
Education on your product is crucial, but so is a sound understanding not only of the market but of the problems publishers, editors and journalists face every day. So is turning your prospects or clients into partners: a true partnership goes both ways, beyond the regular client-supplier relationship, keeping you as close as possible to demand so that you understand the issues a newsroom meets and can adjust your roadmap to its needs. This is the best way for a startup to carve out a space. It is a hard and long process, but eventually things start to shift.
AI will continue to develop and make breakthroughs in more and more industries. In news specifically, AI will become an essential part of a more productive and efficient newsroom by speeding up processes. AI will empower journalists and editors to make better decisions and automate many of the time-consuming, labor-intensive tasks that are not directly related to the content. AI and automation will enable storytellers to focus more on the craft of journalism and less on the mechanical work, leading to a more visual and rich news experience. Humans and machines working together will create a more streamlined workflow that responds to the evolving content-consumption patterns we’re seeing today and will continue to see in the future.
Wibbitz is a text-to-video creation platform built for publishers. Its advanced text-to-video technology can automatically produce premium branded videos using text content in seconds. Wibbitz just opened a new office in Paris.
“Bots and automation are increasingly becoming a part of how journalism is produced and content is being consumed.” (TODAY.ng, 5 November 2016)
“Done properly, automated journalism has the potential to make all our jobs more interesting.” (The Irish Times, 23 March 2017)
“Automation was never about replacing jobs. It has always been about how we can best use the resources we have in a rapidly changing landscape and how we harness technology to run the best journalism company in the world.” (The Huffington Post, 30 January 2015)
In 2014, the Associated Press began automating some of its coverage of corporate earnings reports. Instead of having humans cover the basic finance stories, the AP, working with the firm Automated Insights, was able to use algorithms to speed up the process and free up human reporters to pursue more complex stories.
The AP estimates that the automated stories have freed up 20 percent of the time its journalists spent on earnings reports and allowed it to cover additional companies that it didn’t have the capacity to report on before. The newswire has since started automating some of its minor league baseball coverage, and it told me last year that it plans to expand its use of algorithms in the newsroom.
“Through automation, AP is providing customers with 12 times as many corporate earnings stories as before (to over 3,700), including for a lot of very small companies that never received much attention,” Lisa Gibbs, AP’s global business editor, said in a report the AP released Wednesday.
The AP’s report — written by AP strategy and development manager Francesco Marconi and AP research fellow Alex Siegman, along with help from multiple AI systems — details some of the wire’s efforts toward automating its reporting while also sharing best practices and explaining the technology that’s involved, including machine learning, natural language processing, and more.
The report additionally identifies three particular areas of note that newsrooms should pay attention to as they consider introducing augmented journalism: unchecked algorithms, workflow disruption, and the widening gap in skills needed among human reporters to produce this type of reporting.
To highlight the challenges of algorithmic journalism, the report constructs a hypothetical scenario in which a team of reporters covering oil drilling uses AI to analyze satellite images and find areas affected by drilling-related deforestation:
Our hypothetical team begins by feeding their AI system a series of satellite images that they know represent deforestation via oil drilling, as well as a series of satellite images that they know do not represent deforestation via oil drilling. Using this training data, the machine should be able to view a novel satellite image and determine whether the land depicted is ultimately of any interest to the journalists.
The system reviews the training data and outputs a list of four locations the machine says are definitely representative of rapid deforestation caused by nearby drilling activity. But later, when the team actually visits each location in pursuit of the story, they find that the deforestation was not caused by drilling. In one case, there was a fire; in another, a timber company was responsible.
It appears that when reviewing the training data, the system taught itself to determine whether an area with rapid deforestation was near a mountainous area — because every image the journalists used as training data had mountains in the photos. Oil drilling wasn’t taken into consideration. Had the team known how their system was learning, they could have avoided such a mistake.
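The failure mode described above, a model latching onto a confounding feature (mountains) that happens to correlate perfectly with the label in the training set, can be reproduced with a toy sketch. The feature names and data below are invented for illustration; they are not from the AP report.

```python
# Each training example: (has_mountains, has_drilling) -> deforestation label.
# By accident of data collection, every positive image shows mountains and
# every negative image lacks them, so "mountains" separates the labels
# just as well as "drilling" does.
train = [
    ((1, 1), 1),
    ((1, 1), 1),
    ((0, 0), 0),
    ((0, 0), 0),
]

def best_single_feature(data):
    """Pick the feature index whose raw value best predicts the label."""
    n_features = len(data[0][0])
    scores = [sum(1 for x, y in data if x[i] == y) for i in range(n_features)]
    return max(range(n_features), key=lambda i: scores[i])

learned = best_single_feature(train)
print("learned feature index:", learned)  # 0, i.e. "mountains"

# A mountainous site deforested by a fire, with no drilling at all:
test = (1, 0)
print("prediction:", test[learned])  # 1: flagged despite the absence of drilling
```

Both features score identically on this training set, so the learner has no reason to prefer the causal one; only out-of-sample checks like the team's field visits expose the mistake.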
Algorithms are created by humans, and journalists need to be aware of their biases and cognizant that they can make mistakes. “We need to treat numbers with the same kind of care that we would treat facts in a story,” said Dan Keyserling, head of communications at Jigsaw, the technology incubator within Google’s parent company Alphabet. “They need to be checked, they need to be qualified and their context needs to be understood.”
That means the automation systems need maintenance and upkeep, which could change the workflow and processes of editors within the newsroom:
Story templates were built for the automated output by experienced AP editors. Special data feeds were designed by a third-party provider to feed the templates. Continuing maintenance is required on these components as basic company information changes quarter to quarter, and although the stories are generated and sent directly out on the AP wires without human intervention, the journalists have to watch for any errors and correct them.
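The template-plus-data-feed workflow described above can be sketched in a few lines. The template text, field names, and figures here are invented for illustration; they do not reflect the actual AP or Automated Insights system.

```python
# A minimal sketch of template-driven earnings copy: a fixed story template
# filled from a structured "data feed" (here, just a dict).
TEMPLATE = (
    "{company} on {day} reported {quarter} net income of ${net_income}M, "
    "{direction} from ${prior_income}M a year earlier. Revenue was ${revenue}M."
)

def render_story(feed):
    # A real system would validate every field and route anomalous values
    # to a human editor, per the maintenance duties described above.
    direction = "up" if feed["net_income"] > feed["prior_income"] else "down"
    return TEMPLATE.format(direction=direction, **feed)

story = render_story({
    "company": "Acme Corp", "day": "Tuesday", "quarter": "second-quarter",
    "net_income": 42, "prior_income": 37, "revenue": 310,
})
print(story)
```

The upkeep the AP describes lives in exactly these two places: the template wording maintained by editors, and the feed schema maintained with the data provider.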
Automation also changes the type of work journalists do. For instance, when it comes to the AP’s corporate earnings stories, Gibbs, the global business editor, explained that reporters are now pursuing different types of reporting.
“With the freed-up time, AP journalists are able to engage with more user-generated content, develop multimedia reports, pursue investigative work and focus on more complex stories,” Gibbs said.
Still, in order to use this type of automated reporting, newsrooms must employ data scientists, technologists, and others who are able to implement and maintain the algorithms. “We’ve put a lot of effort into putting more journalists who have programming skills in the newsrooms,” said New York Times chief technical officer Nick Rockwell.
The report emphasizes that communication and collaboration are critical, especially while keeping a news organization’s journalistic mission front and center. The report outlined how it views the role data scientists play:
Data scientists are individuals with the technical capabilities to implement the artificial intelligence systems necessary to augment journalism. They are principally scientists, but they have an understanding as to what makes a good story and what makes good journalism, and they know how to communicate well with journalists.
“It’s important to bring science into newsrooms because the standards of good science — transparency and reproducibility — fit right at home in journalism,” said Larry Fenn, a trained mathematician working as a journalist in AP’s data team.
The full AP study is available here.
The availability of data feeds, the demand for news on digital devices, and advances in algorithms are helping to make automated journalism more prevalent. This article extends the literature on the subject by analysing professional journalists’ experiences with, and opinions about, the technology. Uniquely, the participants were drawn from a range of news organizations—including the BBC, CNN, and Thomson Reuters—and had first-hand experience working with robo-writing software provided by one of the leading technology suppliers. The results reveal journalists’ judgements on the limitations of automation, including the nature of its sources and the sensitivity of its “nose for news”. Nonetheless, journalists believe that automated journalism will become more common, increasing the depth, breadth, specificity, and immediacy of information available. While some news organizations and consumers may benefit, such changes raise ethical and societal issues and, counter-intuitively perhaps, may increase the need for skills—news judgement, curiosity, and scepticism—that human journalists embody.
This paper presents the results of an evaluation of three different types of geographical news classification methods: (1) simple keyword matching, a popular method in media and communications research; (2) geographical information extraction systems equipped with named-entity recognition and place name disambiguation mechanisms (Open Calais and Geoparser.io); and (3) a semi-supervised machine learning classifier developed by the author (Newsmap). Newsmap substitutes dictionary-based labelling for manual coding of news stories when creating large training sets, extracting large numbers of geographical words without human involvement, and it also identifies multi-word names fully automatically to reduce ambiguity. An evaluation of the classification accuracy of the three types of methods against 5,000 human-coded news summaries reveals that Newsmap outperforms the geographical information extraction systems in overall accuracy, while simple keyword matching suffers from the ambiguity of place names.
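Method (1) from the abstract above, simple keyword matching, is easy to sketch, and the sketch also shows the ambiguity problem the evaluation highlights. The keyword lists and headlines are invented examples, not the paper's actual dictionary.

```python
# Naive keyword matching for geographical news classification: a headline is
# assigned to every country whose keywords it contains.
COUNTRY_KEYWORDS = {
    "TR": ["turkey", "ankara", "istanbul"],
    "GE": ["georgia", "tbilisi"],
}

def classify(headline):
    """Return the sorted country codes whose keywords appear in the headline."""
    text = headline.lower()
    return sorted(
        code for code, words in COUNTRY_KEYWORDS.items()
        if any(w in text for w in words)
    )

print(classify("Earthquake strikes eastern Turkey"))    # ['TR']
# The ambiguity failure: "Georgia" the US state matches the country entry.
print(classify("Georgia lawmakers pass state budget"))  # ['GE']
```

Disambiguation mechanisms like those in methods (2) and (3) exist precisely to resolve cases such as the second headline, where a bare string match cannot tell the country from the US state.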