Moving into 2019, D/SRUPTION highlights the trends your organisation should be tracking
Last year, we outlined the key trends that would shape the year through disruptive technologies and business models. While they continue to influence the decisions that businesses make, in 2019 there are a host of new considerations to be aware of.
Web 3.0
Web 3.0 refers to the next iteration of the internet, propelling search away from keywords and towards smarter, more interactive experiences. Unlike today’s search engines, Web 3.0 will use Artificial Intelligence to understand what a user actually means. Currently, search terms bring up the most popular results, but these don’t necessarily meet the needs of the user. By applying context, Web 3.0 will be able to filter out irrelevant information and deliver a personal, specific set of results.
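The idea of applying context can be illustrated with a toy sketch: instead of ordering results by popularity alone, overlap with a user’s interests boosts relevance. All of the data, tags and weights below are invented for illustration, not a real search algorithm.

```python
# Toy illustration of context-aware ranking: overlap with the user's
# interest profile is blended with raw popularity. Data is hypothetical.

results = [
    {"title": "Jaguar the car", "tags": {"cars", "luxury"}, "popularity": 0.9},
    {"title": "Jaguar the animal", "tags": {"wildlife", "big cats"}, "popularity": 0.7},
]

user_context = {"wildlife", "photography", "big cats"}

def contextual_score(result, context):
    """Blend popularity with how well the result matches the user's interests."""
    overlap = len(result["tags"] & context) / len(result["tags"])
    return 0.4 * result["popularity"] + 0.6 * overlap  # weights are arbitrary

ranked = sorted(results, key=lambda r: contextual_score(r, user_context), reverse=True)
print(ranked[0]["title"])  # the animal page outranks the more popular car page
```

A real Web 3.0 search engine would infer the context set automatically rather than hold it in a hard-coded variable, but the re-ranking principle is the same.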
The Internet of Things also plays an important role in enabling Web 3.0. The next generation of ubiquitous, always-on search won’t be possible without mass connectivity, which is what 2019 is expected to deliver through 5G and improved data storage solutions. In this, Web 3.0 is tied to another of our 2019 trends: spatial computing. Omnipresent search will require different interfaces and different methods of interaction, such as voice and potentially even gestures. The growing open data movement is another enabler for Web 3.0. Last year, Figshare reported that 64 per cent of respondents from the research community had made their data openly available.
Web 3.0 has become all the more relevant in recent months thanks to the efforts of the blockchain community. Opera launched a browser with a built-in Ethereum wallet on the 13th of December, and in the first quarter of this year the Hydro dApp Store will be released to the public. By supporting dApps, these platforms edge closer to a decentralised web.
Simulation and digital twins
In 2018, digital twins emerged as a useful way to digitally represent physical assets. The concept originated at NASA during the early years of space exploration as a way to keep track of complicated machinery. Using digital mirroring technology, NASA could replicate real-world systems and equipment.
Today, digital twins are helpful tools for data visualisation that can predict potential problems before they occur. They provide self-updating simulation models, gathering data via sensors and analysing it against a variety of other sources. The aim of building a digital twin is to gain a deeper understanding of how a process, product or service could be improved. The rise of digital twins has been fuelled by the explosion of the Internet of Things and Industry 4.0. As more things become connected, it could eventually be possible to create digital twins for all of our physical devices.
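At its simplest, a digital twin is a software object that mirrors a physical asset, updates its state from sensor readings, and flags potential problems before they escalate. The pump, thresholds and trend rule below are invented purely to sketch the pattern.

```python
# Minimal sketch of a digital twin: mirror a physical pump, ingest sensor
# readings, and warn before a problem occurs. All thresholds are hypothetical.

class PumpTwin:
    def __init__(self, max_temp_c=80.0):
        self.max_temp_c = max_temp_c
        self.history = []

    def ingest(self, reading):
        """Update the twin's state from a sensor reading (a dict)."""
        self.history.append(reading)

    def predict_issue(self):
        """Crude trend check: warn if temperature is climbing towards the limit."""
        temps = [r["temp_c"] for r in self.history[-3:]]
        if len(temps) == 3 and temps[2] > temps[1] > temps[0] \
                and temps[2] > 0.9 * self.max_temp_c:
            return "overheating risk"
        return None

twin = PumpTwin()
for t in (65.0, 70.0, 74.0):
    twin.ingest({"temp_c": t})
print(twin.predict_issue())  # -> overheating risk
```

Production twins replace the hand-written trend check with physics models or machine learning, but the loop of ingest, compare and predict is the core of the idea.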
“Simulation of real-world environments has become a critical requirement for models operating at the extreme end of complexity, in training self-driving cars for example, or anomaly detection using the digital twins of IoT-enabled physical assets,” explains Tariq Khatri, cofounder of Machinable.
This year, the complexity of digital business will necessitate the adoption of simulation techniques like digital twins to understand why products function in the way that they do, and how to make them work optimally. As the expectations and complexity of consumer demands grow, monitoring systems closely will enable supply chains to meet demand. Gartner has also identified digital twins as a top technology trend for 2019, and IDC forecasts that 30 per cent of Global 2000 companies will use the technology by 2020. A new development to watch is the birth of the digital twin of an organisation (DTO). Instead of monitoring a collection of singular products, it will become possible to track the health of the entire enterprise.
The Market of One
2018 witnessed the phenomenon of mass personalisation. Rather than targeting markets as a whole, organisations realised that true value lay in the market of one. How better to meet the needs of customers than to treat them as individuals?
The use of personal data to answer the specific requirements of individual customers enables what D/SRUPTION calls ‘the market of one’. Personalisation at an individual level has become a vital capability in healthcare, retail, finance, entertainment, and any other customer-facing business. Data has made it possible to communicate directly with a consumer, creating more rewarding B2C experiences. To accommodate the abandonment of mass market estimations, supply chains have been forced to change. As digital natives grow up, and their children are born into a rapidly digitalised world, as-a-service and self-service business models will thrive. Coupled with geotargeting and behavioural data analysis, companies now have the tools to gather information from individuals and turn it into focused insights.
Acknowledging the market of one is the first step. However, building the systems that can take advantage of it is a challenge that legacy firms will grapple with. Now that consumers are accustomed to individual, personal experiences, those firms have no choice but to adapt. The market of one also sharpens the debate over data, and is likely to prompt new regulations that push organisations to meet stringent data standards.
“Organisations need to move away from targeting markets by segment and look to focus on the individual in the market of one,” says Rob Prevett, CEO of D/SRUPTION. “Developing the capabilities to measure specifically what an individual consumer wants, and being in a position to link processes and resources to provide it will be essential to business success.”
Edge computing
Edge computing involves a shake-up of the traditional topology of a computer network. Information processing capabilities are placed close to the source of that information, in an effort to reduce delays in the network.
It is in the current age of the Internet of Things that edge computing finds its main function. It can facilitate better connectivity to remote devices, thereby powering distributed IoT networks. As we progress through 2019, the use and the functionality of edge devices will increase. We can expect not only to see the appearance of edge devices in greater numbers, but also their creation with advanced sensing, AI, storage, computing and analytical power. One area where this will have particular impact is data processing, as it gives companies greater insight into their consumers and entirely new ways of conducting business. In healthcare, for example, as D/SRUPTION’s future of health expert Tina Woods says, “edge computing will handle the data explosion from the rise of genomics.”
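The pattern behind edge computing can be sketched in a few lines: process raw sensor data where it is produced, and forward only a compact summary, cutting both latency and bandwidth. The device, readings and payload shape below are hypothetical.

```python
# Sketch of the edge pattern: reduce a burst of raw readings to the
# statistics the cloud actually needs, instead of shipping every sample.

def summarise_on_edge(raw_samples):
    """Run on the edge device: condense readings to a small payload."""
    return {
        "count": len(raw_samples),
        "mean": sum(raw_samples) / len(raw_samples),
        "max": max(raw_samples),
    }

raw = [20.1, 20.3, 35.9, 20.2]  # e.g. one second of temperature readings
payload = summarise_on_edge(raw)
print(payload)  # three numbers cross the network instead of every sample
```

In a healthcare or genomics setting, the same principle applies at much larger scale: filtering and aggregation happen near the device, and only decisions or summaries travel upstream.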
The voice economy
Voice technology is redefining the way we interact with machines. In business, this has led to the creation of the voice economy – an entirely new ecosystem of marketing, branding and consumer engagement with the voice – and it’s easy to see why. Communicating via voice is entirely natural for consumers. When voice-powered devices are there, ready and waiting in the background, there’s no need to rely on intrusive screens to retrieve information or complete a task. This makes it easier for consumers to engage with products and services in a seamless and enjoyable way. That is, when the technology works as it is supposed to…
While the popularity of personal voice assistants such as Amazon Alexa and Google Home continues to rise in UK households, the future growth of the voice economy in 2019 and beyond will depend upon the sophistication of this technology. In blunt terms, we’re all going to get tired of asking Alexa questions if she can’t understand them, and won’t learn from her mistakes…
Luckily, over the next few years we can expect continued progress in natural language processing (NLP), an application of AI that is crucial to a machine’s ability to process human speech. In 2018, for example, Google’s Duplex assistant successfully made two phone calls to human operators to book a hair appointment and complete a restaurant reservation, without them realising they were actually talking to a machine. This kind of activity demonstrates the exciting future of voice assistants – and the voice economy – if the technology can fulfil its potential. Watch this space for more developments in 2019.
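To see the kind of problem NLP solves, here is a deliberately crude, rule-based intent matcher, a toy stand-in for the statistical models that real voice assistants use. The intents and keyword sets are invented for the example.

```python
# Toy rule-based intent detection: match an utterance to the intent whose
# keyword set it overlaps most. Real assistants use learned models instead.

INTENTS = {
    "book_appointment": {"book", "appointment", "reserve", "reservation"},
    "weather": {"weather", "rain", "forecast"},
}

def detect_intent(utterance):
    """Return the best-matching intent name, or None if nothing matches."""
    words = set(utterance.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(detect_intent("Can you book a hair appointment for Tuesday"))
```

Keyword matching breaks down the moment users phrase things unexpectedly, which is precisely why advances in NLP, not bigger keyword lists, will decide whether the voice economy grows.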
Strategic automation
In 2019, we’ll be keeping an eye on strategic automation. If you’re not familiar with the term, it’s a nuanced take on intelligent automation – the combination of automation and artificial intelligence to automate business processes and drive efficiency in organisations. So why the focus on ‘strategic’ automation rather than its intelligent counterpart? Well, recent research shows that while the automation of business tasks is a growing trend, many companies are focussing on its short-term benefits, rather than pursuing long-term, strategic initiatives.
According to Andrew Burgess, D/SRUPTION’s expert in AI and automation, instead of finding individual areas of an organisation that could benefit from automation, strategic automation considers the business at a holistic level. Businesses can unify the workings of individual technologies across all of their operations to truly reap the benefits of strategic automation, and this is what we expect to see more of in 2019.
Although automation has always been a watchword for employees who fear that machines might put them out of their jobs, the reality of strategic automation is that it relieves humans from mundane, repetitive tasks. With the promises of automation including higher productivity levels, the ability of employees to focus on more meaningful work and a happier workforce, many organisations will consider their automation options this year.
Ubiquitous AI
Ubiquitous AI refers to the presence of artificial intelligence in all of our machines, applications and processes. As the applications of AI become more powerful, more varied, and easier to use, the world is gradually moving towards this state of affairs. For one thing, it is now much easier for software developers to integrate AI capabilities into their applications without having to create or manipulate the AI themselves.
This describes the growing movement of ‘democratised AI’, a publicly stated aim of all the major technology companies including Microsoft, Google, Amazon and IBM. As Ronald Ashri, co-founder of GreenShoot Labs, notes, “when they speak of democratised AI, these companies mean that they will make it increasingly easy for any software developer to access powerful machine learning algorithms through cloud-based technologies. As a result we will see many more AI technologies used within software and AI becoming part of the standard toolset of developers.” This will fundamentally open up the reach of artificial intelligence beyond a highly skilled data scientist community in an IT department, and into the rest of an organisation, including citizen data scientists and developers.
As we move forward into an age of ubiquitous AI, it’s also important to consider the responsible use of artificial intelligence. The debate around the ethical use of AI continues to raise a host of seemingly unsolvable questions. Yet with AI finding its way into more and more areas of our lives, these conversations are becoming more important than ever.
Spatial computing
Spatial computing blends technology into the real world using augmented, mixed, and virtual reality. AR puts a digital layer over the real world, MR places interactive digital objects into the real world, and VR puts the user in another world entirely. Gradually, augmented and mixed reality technology has moved away from gaming and into other industries, providing training and data visualisation, and enabling collaborative work.
Spatial computing promises a new relationship between humans and digital content. Instead of interacting with interfaces via typed commands or touch, users control spatial computing interfaces with their eyes, gestures, and voice. The technology is already in rudimentary use in semi-autonomous cars, robots, drones, and MR headsets. Over the past year, the likes of Google, Facebook and Microsoft have all taken steps forward in building their mixed reality capabilities. So why now? Developers now know more about the parameters of spatial computing and the possibility of computationally engineered matter. Never before has computing power been cheaper or more accessible, allowing machines to leave the confines of single-screen interfaces and occupy the real world.
Don’t expect to see universal, seamless integration between the real world and the digital sphere in 2019 – but do look out for big tech companies preparing for the next big shift in human-to-machine communications. Apple’s latest move, for example, was to hire a former HoloLens designer. And it looks like Magic Leap might finally reveal what it has been quietly working on for so long. In December, the company announced a partnership with news streaming service Cheddar to bring current affairs to Magic Leap One devices. A commercial release date is yet to be divulged, but the virtual building blocks are in place for mass market mixed reality.
Quantum computing
Last on our list of disruptive trends for 2019 is quantum computing. Although the technology is still in its early stages, it offers unprecedented scope for the way we process information. The unparalleled levels of computing power offered by quantum have the potential to enhance our devices beyond recognition, and computers would become far more varied in capacity and power. We can’t expect to see quantum in commercial computers any time soon, but an understanding of this technology will serve businesses well in the coming years, particularly when they consider how to deal with large amounts of data. This earns quantum computing a well-deserved place on our list.
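A short sketch hints at where that power comes from: describing n qubits classically requires 2 to the power of n amplitudes, so every extra qubit doubles the state a conventional machine must track. The snippet below simulates a single qubit with plain Python lists, an illustrative toy rather than real quantum hardware.

```python
# One qubit simulated classically: a pair of amplitudes for |0> and |1>.
# n qubits would need 2**n amplitudes, which is why simulation gets hard fast.

import math

state = [1.0, 0.0]  # start in |0>

def hadamard(qubit_state):
    """Put a single qubit into an equal superposition of |0> and |1>."""
    a, b = qubit_state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = hadamard(state)
probabilities = [amp ** 2 for amp in state]
print(probabilities)  # ~[0.5, 0.5]: both measurement outcomes equally likely
print(2 ** 50)        # amplitudes needed to describe just 50 qubits
```

The last line is the business takeaway: well before quantum machines are commercial, the sheer size of that state space is what makes them interesting for data-heavy problems.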
Which trends will be most significant to your business in 2019? Take our survey here.
You can also stay up to date with these trends in our free weekly newsletter.