TECHNOLOGY FUTURE


The Future of Technology: What Are SMBs Worried About?

With cyberattacks such as WannaCry, NotPetya, and Bad Rabbit affecting businesses across the globe, it's clear that 2017 saw its fair share of high-profile data breaches.

It's reasonable, then, for SMBs to feel concerned about protecting their data against cyberthreats. But what else are they worried about when looking to the future?

From BYOD and cloud computing to artificial intelligence, SMBs are keen to use these new technologies to grow their businesses. But with new technology come new threats.

So, are SMBs most concerned about data breaches, rogue artificial intelligence, or the use of genetic screening for employees?

To discover what SMB owners and employees are worried about when it comes to future technology, we surveyed more than 100 small businesses about cybersecurity, privacy, and artificial intelligence.

This research, presented in the form of an infographic, covers a range of topics, with respondents revealing:

• How concerned they are that companies will use AI against their employees

• If they believe that managers will be replaced with programmed AI

• Their thoughts on changes to the access and control of personal data

The main feature of this emerging autonomous, virtual economy is not merely that it deepens the physical one. It is that it steadily provides an external intelligence in business, one housed not internally in human workers but externally in the virtual economy's algorithms and machines. Business, engineering, and financial processes can now draw on huge "libraries" of intelligent functions, which greatly boost their activities and, bit by bit, render human activities obsolete.

This is causing the economy to enter a new and different era. The economy has arrived at a point where it produces enough in principle for everyone, but where the means of access to these services and products, jobs, is steadily tightening. So this new period we are entering is not so much about production anymore—how much is produced; it is about distribution—how people get a share in what is produced. Everything from trade policies to government projects to commercial regulations will in the future be evaluated by distribution. Politics will change, free-market beliefs will change, social structures will change. We are still at the start of this shift, but it will be deep and will unfold indefinitely in the future.

How did we get to where we are now? Every 20 years or so the digital revolution morphs and brings us something qualitatively different. Each morphing issues from a set of particular new technologies, and each causes characteristic changes in the economy.

The first morphing, in the 1970s and ’80s, brought us integrated circuits—tiny processors and memory on microchips that miniaturized and greatly speeded calculation. Engineers could use computer-aided design programs, managers could track inventories in real time, and geologists could discern strata and calculate the chance of oil. The economy for the first time had serious computational assistance. Modern fast personal computation had arrived.

The second morphing, in the 1990s and 2000s, brought us the connection of digital processes. Computers got linked together into local and global networks via telephonic or fiber-optic or satellite transmission. The Internet became a commercial entity, web services emerged, and the cloud provided shared computing resources. Everything suddenly was in conversation with everything else.

It’s here that the virtual economy of interconnected machines, software, and processes emerges, where physical actions now could be executed digitally. And it’s also here that the age-old importance of geographical locality fades. An architecture firm in Seattle could concern itself with the overall design of a new high-rise and have less expensive workers in Budapest take care of the detailing, in an interactive way. Retailers in the United States could monitor manufacturers in China and track suppliers in real time. Offshoring took off, production concentrated where it was cheapest—Mexico, Ireland, China—and previously thriving home local economies began to wither. Modern globalization had arrived and it was very much the result of connecting computers.

The third morphing—the one we are in now—began roughly in the 2010s, and it has brought us something that at first looks insignificant: cheap and ubiquitous sensors. We have radar and lidar sensors, gyroscopic sensors, magnetic sensors, blood-chemistry sensors, pressure, temperature, flow, and moisture sensors, by the dozens and hundreds all meshed together into wireless networks to inform us of the presence of objects or chemicals, or of a system’s current status or position, or changes in its external conditions.

These sensors brought us data—oceans of data—and all that data invited us to make sense of it. If we could collect images of humans, we could use these to recognize their faces. If we could “see” objects such as roads and pedestrians, we could use this to automatically drive cars.

As a result, in the last ten years or so, what became prominent was the development of methods, intelligent algorithms, for recognizing things and doing something with the result. And so we got computer vision, the ability for machines to recognize objects; and we got natural-language processing, the ability to talk to a computer as we would to another human being. We got digital language translation, face recognition, voice recognition, inductive inference, and digital assistants.

What came as a surprise was that these intelligent algorithms were not designed from symbolic logic, with rules and grammar and getting all the exceptions correct. Instead they were put together by using masses of data to form associations: This complicated pixel pattern means “cat,” that one means “face”—Jennifer Aniston’s face. This set of Jeopardy! quiz words points to “Julius Caesar,” that one points to “Andrew Jackson.” This silent sequence of moving lips means these particular spoken words. Intelligent algorithms are not genius deductions, they are associations made possible by clever statistical methods using masses of data.
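To make the contrast with rule-based logic concrete, here is a minimal, purely illustrative sketch of association-by-statistics: a nearest-neighbor classifier that labels a new pixel pattern by finding the most similar patterns in a mass of labeled examples. Nothing here comes from the article; the data and labels are invented to show the idea.

```python
import numpy as np

def nearest_neighbor_label(examples: np.ndarray, labels: list,
                           query: np.ndarray, k: int = 3) -> str:
    """Label `query` by majority vote among its k closest examples.
    Association, not deduction: the only 'rule' is similarity to data."""
    distances = np.linalg.norm(examples - query, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy "pixel patterns": four-pixel images standing in for cats and faces
examples = np.array([[0.9, 0.8, 0.1, 0.2],   # cat
                     [0.8, 0.9, 0.2, 0.1],   # cat
                     [0.1, 0.2, 0.9, 0.8],   # face
                     [0.2, 0.1, 0.8, 0.9]])  # face
labels = ["cat", "cat", "face", "face"]
print(nearest_neighbor_label(examples, labels,
                             np.array([0.85, 0.8, 0.15, 0.2])))  # -> cat
```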

Of course the clever statistical techniques took huge amounts of engineering and several years to get right. They were domain specific: an algorithm that could lip-read could not recognize faces. And they worked in business too: this customer profile means "issue a $1.2 million mortgage"; that one means "don't act."

Computers, and this was the second surprise, could suddenly do what we thought only humans could do—association.

It would be easy to see associative intelligence as just another improvement in digital technology, and some economists do. But I believe it’s more than that. “Intelligence” in this context doesn’t mean conscious thought or deductive reasoning or “understanding.” It means the ability to make appropriate associations, or in an action domain to sense a situation and act appropriately. This fits with biological basics, where intelligence is about recognizing and sensing and using this to act appropriately. A jellyfish uses a network of chemical sensors to detect edible material drifting near it, and these trigger a network of motor neurons to cause the jellyfish to close automatically around the material for digestion.

Thus when intelligent algorithms help a fighter jet avoid a midair collision, they are sensing the situation, computing possible responses, selecting one, and taking appropriate avoidance action.

There doesn’t need to be a controller at the center of such intelligence; appropriate action can emerge as the property of the whole system. Driverless traffic when it arrives will have autonomous cars traveling on special lanes, in conversation with each other, with special road markers, and with signaling lights. These in turn will be in conversation with approaching traffic and with the needs of other parts of the traffic system. Intelligence here—appropriate collective action—emerges from the ongoing conversation of all these items. This sort of intelligence is self-organizing, conversational, ever-adjusting, and dynamic. It is also largely autonomous. These conversations and their outcomes will take place with little or no human awareness or intervention.

The interesting thing here isn’t the form intelligence takes. It’s that intelligence is no longer housed internally in the brains of human workers but has moved outward into the virtual economy, into the conversation among intelligent algorithms. It has become external. The physical economy demands or queries; the virtual economy checks and converses and computes externally and then reports back to the physical economy—which then responds appropriately. The virtual economy is not just an Internet of Things, it is a source of intelligent action—intelligence external to human workers.

This shift from internal to external intelligence is important. When the printing revolution arrived in the 15th and 16th centuries it took information housed internally in manuscripts in monasteries and made it available publicly. Information suddenly became external: it ceased to be the property of the church and now could be accessed, pondered, shared, and built upon by lay readers, singly or in unison. The result was an explosion of knowledge, of past texts, theological ideas, and astronomical theories. Scholars agree these greatly accelerated the Renaissance, the Reformation, and the coming of science. Printing, argues commentator Douglas Robertson, created our modern world.

Now we have a second shift from internal to external, that of intelligence, and because intelligence is not just information but something more powerful—the use of information—there’s no reason to think this shift will be less powerful than the first one. We don’t yet know its consequences, but there is no upper limit to intelligence and thus to the new structures it will bring in the future.

To come back to our current time, how is this externalization of human thinking and judgment changing business? And what new opportunities is it bringing?

Some companies can apply the new intelligence capabilities like face recognition or voice verification to automate current products, services, and value chains. And there is plenty of that.

More radical change comes when companies stitch together pieces of external intelligence and create new business models with them. Recently I visited a fintech (financial technology) company in China, which had developed a phone app for borrowing money on the fly while shopping. The app senses your voice and passes it to online algorithms for identity recognition; other algorithms fan out and query your bank accounts, credit history, and social-media profile; further intelligent algorithms weigh all these and a suitable credit offer appears on your phone. All within seconds. This isn’t quite the adoption of external intelligence; it is the combining of sense-making algorithms, data-lookup algorithms, and natural-language algorithms to fulfill a task once done by humans.
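As a rough sketch of how such a pipeline might be stitched together, the code below chains the stages the article describes: a voice identity check, a parallel fan-out of data queries, and a scoring step. Every function, data field, and threshold here is a hypothetical stand-in for the sense-making, data-lookup, and risk-weighing algorithms mentioned above, not anyone's real system.

```python
from concurrent.futures import ThreadPoolExecutor

def verify_identity_from_voice(sample: bytes) -> str:
    return "user-42"               # stub: real systems use voice biometrics

def fetch_bank_accounts(uid: str) -> dict:
    return {"balance": 5_400}      # stub data lookup

def fetch_credit_history(uid: str) -> dict:
    return {"credit_score": 0.8}   # stub data lookup

def fetch_social_profile(uid: str) -> dict:
    return {"risk_flags": 0}       # stub data lookup

def weigh_risk(features: dict) -> float:
    # Stand-in for the "further intelligent algorithms" that weigh it all
    return features["credit_score"] if features["risk_flags"] == 0 else 0.0

def make_credit_offer(voice_sample: bytes) -> str:
    uid = verify_identity_from_voice(voice_sample)
    # Fan out: query accounts, credit history, and social media in parallel
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, uid) for f in
                   (fetch_bank_accounts, fetch_credit_history,
                    fetch_social_profile)]
        features = {k: v for f in futures for k, v in f.result().items()}
    score = weigh_risk(features)
    return (f"Offer: borrow up to ${int(score * 10_000):,}"
            if score > 0.7 else "No offer available")  # arbitrary threshold

print(make_credit_offer(b"voice sample"))  # -> Offer: borrow up to $8,000
```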

In doing this, businesses can reach into and use a “library” or toolbox of already-created virtual structures as Lego pieces to build new organizational models. One such structure is the blockchain, a digital system for executing and recording financial transactions; another is Bitcoin, a shared digital international currency for trading. These are not software or automated functions or smart machinery. Think of them as externally available building blocks constructed from the basic elements of intelligent algorithms and data.
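To give a feel for one of these building blocks, here is a toy illustration of the core idea behind a blockchain: every record carries the hash of its predecessor, so tampering with any entry breaks every later link. This is a teaching sketch of the hash-chain principle only, not Bitcoin's actual protocol.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 fingerprint of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis marker
    chain.append({"prev_hash": prev, "tx": transaction})

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")

# Tampering with an early record invalidates the link that follows it:
chain[0]["tx"] = "Alice pays Bob 500"
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # -> False
```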

The result, whether in retail banking, transport, healthcare, or the military, is that industries aren’t just becoming automated with machines replacing humans. They are using the new intelligent building blocks to re-architect the way they do things. In doing so, they will cease to exist in their current form.

Some large tech companies can directly create externally intelligent systems such as autonomous air-traffic control or advanced medical diagnostics. Others can build proprietary databases and extract intelligent behavior from them. But the advantages of being large or early in the market are limited. The components of external intelligence can't easily be owned; they tend to slide into the public domain. And data can't easily be owned either; it can be garnered from nonproprietary sources.

So we will see both large tech companies and shared, free, autonomous resources in the future. And if past technology revolutions are indicative, we will see entirely new industries spring up that we hadn't even thought of.

Of course there's a much-discussed downside to all this. The autonomous economy is steadily digesting the physical economy and the jobs it provides. It's now a commonplace that we no longer have travel agents or typists or paralegals in anything like the numbers before; even highly skilled workers such as radiologists are being displaced by algorithms that can often do the job better.

Economists don't disagree that jobs are vanishing; they argue over whether these will be replaced by new jobs. Economic history tells us they will. The automobile may have wiped out blacksmiths, but it created new jobs in car manufacturing and highway construction. Freed labor resources, history tells us, always find a replacement outlet, and the digital economy will not be different.

When automotive transport arrived, a whole group of workers—horses—were displaced, never to be employed again. They lost their jobs and vanished from the economy.

Offshoring in the last few decades has eaten up physical jobs and whole industries, jobs that were not replaced. The current transfer of jobs from the physical to the virtual economy is a different sort of offshoring, not to a foreign country but to a virtual one. If we follow recent history we can’t assume these jobs will be replaced either.

In fact, many displaced workers become unemployed; others are forced into low-paying or part-time jobs, or into work in the gig economy. Technological unemployment has many forms.

The term "technological unemployment" comes from John Maynard Keynes's 1930 lecture, "Economic Possibilities for Our Grandchildren," in which he predicted that in the future, around 2030, the production problem would be solved and there would be enough for everyone, but machines (robots, he thought) would cause "technological unemployment." There would be plenty to go around, but the means of getting a share in it, jobs, might be scarce.

We are not quite at 2030, but we have reached the Keynes point, where indeed enough is produced by the economy, both physical and virtual, for all of us. (If total US household income of $8.495 trillion were shared by America’s 116 million households, each would earn $73,000, enough for a decent middle-class life.) And we have reached a point where technological unemployment is becoming a reality.
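The arithmetic behind that parenthetical is a single division:

\[
\frac{\$8.495 \times 10^{12}}{116 \times 10^{6}\ \text{households}} \approx \$73{,}000\ \text{per household}.
\]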

The problem in this new phase we’ve entered is not quite jobs, it is access to what’s produced. Jobs have been the main means of access for only 200 or 300 years. Before that, farm labor, small craft workshops, voluntary piecework, or inherited wealth provided access. Now access needs to change again.

However this happens, we have entered a different phase for the economy, a new era where production matters less and what matters more is access to that production: distribution, in other words—who gets what and how they get it. We have entered the distributive era.

A new era brings new rules and realities, so what will be the economic and social realities of this new era where distribution is paramount?

The criteria for assessing policies will change. The old production-based economy prized anything that helped economic growth. In the distributive economy, where jobs or access to goods are the overwhelming criteria, economic growth looks desirable as long as it creates jobs. Already, unpopular activities such as fracking are justified on this criterion.

The criteria for measuring the economy will also change. GDP and productivity apply best to the physical economy and do not count virtual advances properly.

Free-market philosophy will be more difficult to support in the new atmosphere. It is based on the popular notion that unregulated market behavior leads to economic growth. I've some sympathy with this. Actual economic theory has two propositions. If a market—the airline market, say—is made free and operates according to a host of small-print economic conditions, it will operate so that no resources are wasted. That's efficiency. Second, there will be winners and losers, so if we want to make everyone better off, the winners (big-hub airlines, in this case) need to compensate the losers: small airlines and people who live in remote places. That's distribution, and overall everyone is better off.

In practice, whether with international trade agreements or deregulation or freeing up markets, the efficiency part holds only loosely at best; unregulated behavior often leads to concentration, as companies that get ahead lock in their advantage. And in practice, in the United States and Britain, those who lose have rarely been compensated. In earlier times they could find different jobs, but now that has become problematic. In the distributive era, free-market efficiency will no longer be justifiable if it creates whole classes of people who lose.

The new era will not be an economic one but a political one. We’ve seen the harsh beginnings of this in the United States and Europe. Workers who have steadily lost access to the economy as digital processes replace them have a sense of things falling apart, and a quiet anger about immigration, inequality, and arrogant elites.

I’d like to think the political upheaval is temporary, but there’s a fundamental reason it’s not. Production, the pursuit of more goods, is an economic and engineering problem; distribution, ensuring that people have access to what’s produced, is a political problem. So until we’ve resolved access we’re in for a lengthy period of experimentation, with revamped political ideas and populist parties promising better access to the economy.

This doesn’t mean that old-fashioned socialism will swing into fashion. When things settle I’d expect new political parties that offer some version of a Scandinavian solution: capitalist-guided production and government-guided attention to who gets what. Europe will find this path easier because a loose socialism is part of its tradition. The United States will find it more difficult; it has never prized distribution over efficiency.

Whether we manage a reasonable path forward in this new distributive era depends on how access to the economy’s output will be provided. One advantage is that virtual services are essentially free. Email costs next to nothing. What we will need is access to the remaining physical goods and personal services that aren’t digitized.

For this we will still have jobs, especially those like kindergarten teaching or social work that require human empathy. But jobs will be fewer, and work weeks shorter, and many jobs will be shared. We will almost certainly have a basic income. And we will see a great increase in paid voluntary activities like looking after the elderly or mentoring young people.

We will also need to settle a number of social questions: How will we find meaning in a society where jobs, a huge source of meaning, are scarce? How will we deal with privacy in a society where authorities and corporations can mine into our lives and finances, recognize our faces wherever we go, or track our political beliefs? And do we really want external intelligence “helping” us at every turn: learning how we think, adjusting to our actions, chauffeuring our cars, correcting us, and maybe even “nurturing” us? This ought to be fine, but it’s like having an army of autonomous Jeeveses who altogether know too much about us, who can anticipate our needs in advance and fulfill them, and whom we become dependent upon.

All these challenges will require adjustments. But we can take consolation that we have been in such a place before. In 1850s Britain, the industrial revolution brought massive increases in production, but these were accompanied by unspeakable social conditions, rightly called Dickensian. Children were working 12-hour shifts, people were huddled into tenements, tuberculosis was rife, and labor laws were scarce. In due time safety laws were passed, children and workers were protected, proper housing was put up, sanitation became available, and a middle class emerged. We did adjust, though it took 30 to 50 years—or arguably a century or more. The changes didn’t issue directly from the governments of the time, they came from people, from the ideas of social reformers, doctors and nurses, lawyers and suffragists, and indignant politicians. Our new era won’t be different in this. The needed adjustments will be large and will take decades. But we will make them, we always do.

Industry 4.0 is the current trend of automation and data exchange in manufacturing technologies. It includes cyber-physical systems, the Internet of Things, and cloud computing.

Industry 4.0 creates what has been called a smart factory. Within the modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, and via the Internet of Services, both internal and cross-organizational services are offered and used by participants of the value chain.

Industry 4.0 refers to the convergence and application of nine technologies: advanced robotics; big data and analytics; cloud computing; the industrial internet; horizontal and vertical system integration; simulation; augmented reality; additive manufacturing; and cybersecurity. Companies unlock the full potential of Industry 4.0 by coordinating the implementation of those technologies—for example, by deploying sensors to collect data within a secure cloud environment and applying advanced analytics to gain insights.
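As a minimal sketch of the sensors-plus-analytics pattern just described, the code below gathers simulated temperature readings and flags outliers with a simple z-score test. The sensor model, the threshold, and the idea of treating a list as the "secure cloud store" are all illustrative assumptions, not a reference architecture.

```python
import random
from statistics import mean, stdev

def read_sensor() -> float:
    """Simulated temperature sensor; in a smart factory, readings like
    this would stream over a wireless network into a secure cloud store."""
    return random.gauss(70.0, 1.5)

readings = [read_sensor() for _ in range(500)]  # the collected data

# "Advanced analytics" reduced to its simplest form: flag readings more
# than three standard deviations from the mean as potential faults.
mu, sigma = mean(readings), stdev(readings)
anomalies = [r for r in readings if abs(r - mu) > 3 * sigma]
print(f"{len(anomalies)} anomalous readings out of {len(readings)}")
```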

In this way, a manufacturer can create an integrated, automated, and optimized production flow across the supply chain, as well as synthesize communications between itself and its suppliers and customers. This end-to-end integration will reduce waiting time and work-in-progress inventory and, ultimately, may even make it possible for manufacturers to offer mass customization at the same price as mass production.

As adoption proceeds, the labor cost advantages of traditional low-cost locations will shrink, motivating manufacturers to bring previously offshored jobs back home. Manufacturers will also benefit from higher demand resulting from the growth of existing markets and the introduction of new products and services.

The profile of the workforce will also change. The critical Industry 4.0 jobs—such as for data managers and scientists, software developers, and analytics experts—require skills that differ fundamentally from those that most industrial workers possess today. Manufacturers will need to take steps to close the skills gap, such as retraining the workforce and tapping the pool of digital talent. Moreover, manufacturers will need to create new jobs to meet the higher demand.

The race is on to adopt Industry 4.0. Companies in the US and Germany have implemented Industry 4.0 at approximately the same pace. The real value is achieved when manufacturers maximize the impact of these advances by combining them in a comprehensive program. Manufacturers need to gain a deeper understanding of how they can apply Industry 4.0 and accelerate the pace of adoption. The winners will approach the race to Industry 4.0 as a series of sprints but manage their program as a marathon.

As costs fall for infrastructural technologies such as computers and the internet, these technologies, like railroads, electricity, and telephones before them, become widely available commodities. Once a technology is ubiquitous and available to all, neither scarce nor proprietary, it no longer confers a lasting competitive advantage.

Factors such as commoditization can erode a product's advantage over time, but companies can still create lasting advantage from widely available technology. The powerful effects of technology are visible in economic metrics: technology matters to a company's bottom line. The use of proprietary metrics such as technology intensity to make the most of technology lies at the heart of creating what we call technology advantage.

Given the rapid emergence of disruptive products and business models and the transformative power of digital technologies on business and society, executives must become masters of the global “technology economy,” capable of detecting the economic impact of rapid technological change and able to respond with speed and foresight. In these articles, we explore the new metrics and consider the new ways that companies need to think in order to navigate the technology economy and approach the many investment decisions in which technology plays a role.

Technology infuses even the measurement of the market economy. The composition of indices such as the Dow Jones Industrial Average (DJIA) and the S&P 500 has changed. Industrial companies are being replaced by tech powerhouses like Apple, Google, and Amazon, whose stocks are valued much higher than those of many long-time industrial members. Apple, with its high market capitalization, accounts for such a large share of the DJIA, for example, that a hiccup in its quarterly earnings moves the entire index. Just 20 or 30 years ago, the performance of Caterpillar or GM (the latter no longer part of the DJIA) could have similarly shaken up the market.

Furthermore, technology permeates companies. Worldwide corporate IT spending—an important barometer of the technology economy that focuses on corporate spending for hardware, software, data centers, networks, and staff, whether “internal” IT or outsourced services—is nearly $6 trillion per year. This amount is what it would cost to give a $500 smartphone and $350 tablet to each of the 7.1 billion people on Earth. If the global technology economy were a country and that spending its GDP, it would rank between the economies of China and Japan and would be more than twice the size of the UK economy. Corporate technology spending grew by a factor of almost 20 from 1980 through 2015, while global GDP barely tripled.
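That smartphone-and-tablet comparison checks out arithmetically:

\[
(\$500 + \$350) \times 7.1 \times 10^{9}\ \text{people} \approx \$6.0 \times 10^{12}.
\]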

Of course, the $6 trillion figure for corporate IT spending does not include all the money companies spend on technology. It does not account for spending on the sensors, processors, and other technologies embedded in everyday products, including cars, aircraft engines, appliances, and the smart grid; nor does it include spending on robotics, process automation, and mobile technologies. If we include such investments, our technology-spending estimate increases dramatically.

IT spending data is a proxy for the technology economy, and it highlights the complexities of looking at technology through an economic lens. Measuring this spending well is a critical element of a company's overall digital transformation. Using technology intensity, we can shine a spotlight on the economic impact of this massive amount of technology spending.

In the past, business leaders tended to examine technology spending relative to revenues and relative to operating expenses in isolation. But neither comparison alone gives leaders the whole picture. Revenues don't automatically rise when companies spend more on technology. And it's not necessarily a bad thing if a company's technology spending is high relative to operating expenses. If leaders compare technology spending with revenues and operating expenses simultaneously, however, as technology intensity does, several interesting relationships emerge.
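The article does not spell out the formula, so the sketch below shows one plausible formalization: the two ratios viewed together, with a combined geometric-mean score that is our own illustrative choice.

```python
from dataclasses import dataclass

@dataclass
class CompanyFinancials:
    revenue: float             # annual revenues, in dollars
    operating_expenses: float  # annual operating expenses, in dollars
    tech_spending: float       # annual technology spending, in dollars

def technology_intensity(c: CompanyFinancials) -> dict:
    """Compare technology spending with revenues and with operating
    expenses simultaneously, as the text describes."""
    vs_revenue = c.tech_spending / c.revenue
    vs_opex = c.tech_spending / c.operating_expenses
    return {
        "tech_spend_vs_revenue": vs_revenue,
        "tech_spend_vs_opex": vs_opex,
        "combined_intensity": (vs_revenue * vs_opex) ** 0.5,  # our assumption
    }

# Hypothetical bank: $10B revenue, $7B operating expenses, $1.2B IT spend
print(technology_intensity(CompanyFinancials(10e9, 7e9, 1.2e9)))
```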

Across a range of industries, companies with high technology intensity have high gross margins. For instance, in the insurance sector, top-performing companies enjoy gross margins that are more than three times the margins of average performers and technology intensities that are more than 50% higher. In banking and financial services, companies with the highest gross margins have technology intensities and margins that are roughly double those of average performers. This industry has seen extremely high levels of automation over the past five years—including technology systems that streamline processes, and advances in artificial intelligence that allow robots to answer clients' questions and, eventually, to execute trades. Michael Rogers, the president of State Street, estimated in Bloomberg Markets that by 2020 automation would have replaced one in five of the company's workers. Within a decade, 1.8 million employees in US and European banks could be out of jobs.

We see not just a connection between technology intensity and gross margins but also a strong correlation. That is, technology intensity and gross margins tend to rise and decline together. This effect was seen before and after the recent world economic crash. In the run-up to the Great Recession that started in 2007, companies were investing more and more heavily in technology relative to revenues and operating expenses, and gross margins were rising. That trend accelerated through 2008 and into 2009, when companies belatedly realized the magnitude of what had happened and began to cut technology investment dramatically. After that, technology intensity dropped precipitously along with gross margins.

Along with the technology intensity metric, companies can add other measures to their management dashboard, such as income per dollar of technology spending. We define income as revenues minus operating expenses.
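Written out, with income defined as above, the measure is simply:

\[
\text{income per technology dollar} \;=\; \frac{\text{revenues} - \text{operating expenses}}{\text{technology spending}}.
\]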

For example, the energy industry produces the highest income per dollar of technology spending ($24.24). At the other end of the spectrum, the software publishing and internet services industry produces the lowest ($0.98). In both total technology spending and the technology spending required just to "keep the lights on," we saw a similar rise until 2008, followed by a plunge in income per dollar of technology spending during the market collapse. Afterward, we saw what might be called a failure of recovery, the result of sluggish growth: income per dollar of technology spending in 2014 and 2015 basically flatlined, only just returning to precrash levels.

Another measure that companies can use to connect the dots between the business and the IT function is the IT cost of goods. For example, in the US, the IT cost per day of a hotel bed is $2.50, and for a hospital bed, it is $65. The IT cost of a car is $323.

More than such individual measures, however, companies require different measures at different points in time. It is not enough simply to measure whether a project is on time and on budget. When companies are in the early stages of building new IT systems, leaders need progress measures to tell them whether a project is on track. For example, a bank may invest in automation and artificial intelligence in order to process loans better, cheaper, and faster. It needs metrics to understand how these projects are progressing.

Later on, a company may need deployment measures that determine whether the original business case is still valid. For example, while the bank is building its new system, it might shift a lot of work to the Philippines, cutting the cost of loan processing in half. With the new system, however, the context may change and the original plan may no longer make sense.

Once a company has implemented a project, it needs realization measures that can discern whether the project has yielded the intended results. These microeconomic metrics aren’t the only way to look at the impact of technology spending, of course. Technology matters in a host of macroeconomic measures. In short, technology matters both to companies and to the larger economy.

Top performers are different from average companies. Many top performers achieve higher margins by spending their technology dollars more efficiently and with greater focus than average companies.

Consider the case of a global financial services company that for years had prided itself on its low levels of technology spending. However, the company's gross margins were the lowest in its industry, while its peers with higher margins had higher technology intensities. The company turned things around by rebalancing its technology spending and increasing automation. It invested hundreds of millions of dollars in technology, funded by the lower operating expenses and greater revenues it gained through automation. Now, compared with its peers, it is the only company whose gross margins are growing faster than its technology spending relative to revenues.

To support this kind of digital transformation, executives must define metrics such as technology intensity as key performance indicators for the organization and benchmark their performance relative to that of competitors and companies in adjacent industries. They must then incorporate new metrics into monthly management reports and dashboards and review the role and purpose of technology investments in the light of these measures.

For their part, CIOs can embed key performance indicators into the business on the basis of metrics such as those outlined in this article, conducting regular reviews and supporting efforts to optimize performance. Executives should develop even more sophisticated metrics that truly measure the disruptions that technology fuels. Adopting best practices in these areas will enable a new generation of executive-level technology economists not only to measure what really matters to company performance but also to thrive in the technology economy.

Technological risks are becoming more prominent—and more dangerous. Six principles can guide banks as they manage them.

Technology is synonymous with the modern bank. From the algorithms used in proprietary trading strategies to the mobile applications customers use to deposit checks and pay bills, it supports and enhances every move banks and their customers make.

While banks have greatly benefited from the software and systems that power their work, they have also become more susceptible to the concomitant risks. Many banks now find that these technologies are involved in more than half of their critical operational risks, which typically include the disruption of critical processes outsourced to vendors, breaches of sensitive customer or employee data, and coordinated denial-of-service attacks. Cybersecurity alone can account for 10 percent of total information-technology spending, and that spending is now growing at three times the rate of the budget for the technology being secured.

Exposure to these IT risks has grown in lockstep with the rapid increase in digital services provided directly to customers. For example, mobile transactions have expanded exponentially, presenting malicious external actors with billions of new entry points into bank systems. The complexity and growing vulnerability of the underlying IT systems are of equal concern. Big banks must manage hundreds or even thousands of applications. Many are outdated, having failed to keep pace with the radically changed processes they are supposed to support. Even banks that have successfully upgraded their infrastructure face upgrade-related risks—from project and data management to security problems that persist after the migration is complete.

When technology risks materialize, the financial, regulatory, and reputational implications can be severe. If banks lose customer data in a high-profile incident, they face legal liabilities and fleeing customers. Investors sell shares in the wake of cyberattacks, around 10 percent of which result in a more than 5 percent dip in the stock prices of the companies affected. Regulators penalize firms for noncompliance—from data breach–related fines to mandated remediation activities. Basel II could not be clearer on the topic: one of its seven level-one operational-risk categories is "business disruption and systems failure."

To manage these risks, many banks simply deploy their considerable IT expertise on patching holes, maintaining systems, and meeting regulations. Some have set up specialized teams to cope with particularly acute problems, such as cybersecurity. But these half-measures are unlikely to afford sufficient protection. An IT-oriented approach, furthermore, may be unable to account for wider business implications and operational interdependencies. Institutions focused on compliance could ignore vulnerabilities outside the purview of the regulator and overlook applications critical to the business, with implications for business risk down the road.

Muddling through is no longer an option. The adequate mitigation of technology risk requires a coordinated effort that goes beyond IT-centered remedies. Leading banks are creating specialized teams within the enterprise-risk-management group to manage technology risk, in all its manifestations, across the organization. In this article, we will outline the six principles that these teams use to stay well connected and integrated with the rest of the bank, to develop the skills needed for these complex jobs, and to drive transformation and remediation activities. We conclude with some suggestions for getting these teams off to a good start.

These principles are not a step-by-step manual but rather guidance for creating best-practice technology-risk management. By adhering to them, bank leaders will be able to remain in control of the rising levels of risk associated with the digital age.

Companies can develop a complete picture of their information needs, uses, and risks only through a dialogue between IT and the business to identify the most critical business processes and information assets. The strongest controls can then be applied to the most valuable IT systems and data, the bank’s “crown jewels.” Proprietary trading algorithms stored on laptops, credit transaction data shared with third parties, and employee-health information—all may qualify. The IT-risk group should drive the assessment program, but the businesses need to be engaged with it and assume responsibility for the resulting prioritization, as they are the true risk owners. Only in this way will banks make the most effective investments in security. For example, an IT-led prioritization typically focuses too much on securing “big iron” applications while underemphasizing risks from unstructured data flowing through email and stored in collaboration platforms. For the crown jewels, remediation investments might include multifactor authentication, data-loss-prevention tools, and enhanced monitoring and analytics.

Thinking “business first” is especially important in information security. Data leaks, fraudulent transactions, blackmail, and “hacktivism” all pose dangers. Banks should consider their defenses in light of a threat’s potential adverse impact on the business, rather than defaulting to blanket security standards that ratchet up after each negative headline. Nevertheless, security and the customer experience need not be approached as a trade-off. Leading banks are finding ways to give their clients improved digital solutions that are simultaneously more secure and easier to use.

Most banks have established groups to manage some or all of the various realms in which technology risk can pop up. These typically include cybersecurity and disaster recovery—as well as, increasingly, vendor and third-party management; project and change management; architecture, development, and testing; data quality and governance; and IT compliance. While such groups are interdependent in many ways, particularly when a new product or service is under development, they often are not formally connected.

Best-practice banks coordinate the work of the subdisciplines to capture significant risk-mitigation synergies. For example, housing crown-jewel data on servers other than those used for the main operational IT systems has implications for security, disaster recovery, and data management. Analyzing these three risks separately could lead to inadvertent gaps in risk management or to redundant overprotection. Coordinating the subdisciplines also avoids duplication of effort, such as a product manager completing a half-dozen overlapping risk reviews before product launch.

Banks have not always consistently applied the principles underlying the three lines of defense—the risk-management approach adopted by almost all financial institutions of any size—to technology risk. The three lines of defense is a more complicated approach for technology risks than for market or credit risk, for two main reasons. To begin with, the first line includes both the business and the IT function that enables it. Second, there are often “line one and a half” functions. In cybersecurity, for example, the chief information security officer (CISO) is responsible for setting policies and risk tolerances, as well as for managing operations to meet those expectations—both second-line activities. Yet the role usually resides in the first line, as part of the organization of the chief information officer (CIO). This blurring of the lines can create potentially problematic situations in which the group is “checking its own homework.” Similar boundary confusion can arise in certain subdisciplines, like disaster recovery, where both the first and second lines need real technology expertise.

Banks should carefully clarify the roles and responsibilities in managing technology risk for each line of defense. Increasingly, organizations are asking the IT-risk group to take on the policy, oversight, and assessment roles, while security operations remain within the CIO’s scope.

Careful distinctions like these are needed, for example, when institutions launch a new mobile-banking application. While the business sets out its commercial requirements, the IT group will work collaboratively to define the architectural and technical requirements. The second-line IT-risk function should be engaged from the start of such a project to identify risk exposures (such as the possibility of increased fraud or customer-identity theft) and provide an independent view on mitigation actions and feedback from testing results. Risks identified can be mitigated by the CISO and his or her team, through compensatory controls or design changes before the app is launched. This avoids the delays, cost overruns, and organizational tensions that arise from discovering exposures during a security review conducted too close to launch.

In many banks, technology-risk management is disconnected from enterprise risk management (ERM) and even from the operational-risk team. That inhibits the bank’s ability to prioritize the risks that are of critical importance and deploy the resources to remediate them. A contributing factor is often the absence of a common risk-management technology platform shared by both the IT-risk team and the ERM or operational-risk group. Without such a platform, banks struggle to aggregate risk information consistently, and managers are not equipped with the data they need to make decisions.

For example, as banks manage operational risks, they frequently balance the benefits of automation (to reduce opportunities for human error) against operational process controls (to improve behavior). Each option has advantages but also challenges—automation can introduce technology risk while operational controls can make systems unwieldy. Without a unified view of the risks involved, banks must often rely on advocates of particular initiatives when making risk-management decisions, rather than a holistic view of the available approaches and their merits. The bias can thus be to optimize within a risk category rather than to promote the good of the enterprise.

When the IT-risk group is integrated with ERM, on the other hand, real benefits can result—particularly if the technology-risk team comes under the same umbrella as other operational-risk-management teams. Decisions can be made at the level appropriate to the needs of the business and the potential severity of the risk. The business can make decisions about low-level exposures directly, while the tech- or op-risk group addresses the more significant risks and corporate ERM and senior management address the most significant ones.

Typical decisions with significant but underappreciated risk implications include those affecting a bank’s long-term architectural road map and risk-appetite decisions about testing requirements for major IT changes. When it comes to mobile apps, for example, some banks will choose to be early adopters, given the anticipated customer value, while others wait for best practices to develop. Both courses might be sensible, but only senior management should decide between them.

Two domains where ERM integration can yield great benefits are resilience and disaster recovery, and vendor and third-party management. To prevent the interruption of critical services, IT-risk managers should articulate a risk appetite that reflects the business impact of disruptions. Most banks will find that for a small percentage of their business processes, near-perfect IT resilience is essential. These are customer-initiated, time-critical processes (such as ATM withdrawals, brokerage transactions, and point-of-service purchases) with no real-time alternative. Risk investments in resilience and disaster recovery must focus on these specific processes and the relatively small number of systems that support them. For other processes, where the appetite for risk is relatively high, IT-risk managers should work with the IT function to define the support those processes actually need; here, banks should be able to make savings by reducing the level of support required.

IT-risk managers should also partner with the business and IT to establish standards for security, continuity, and disaster recovery for a bank’s external service providers. Given the sheer number of vendors that banks use, standards and audits must be applied in a risk-prioritized way. Banks should also consider involving their closest vendors and partners more significantly with internal ERM processes (to improve risk identification, assessment, and control) and also with incident response. Banks that use “war games” to test their crisis-response plans often find that the roles and responsibilities of third parties are outdated or poorly defined in service-level agreements, potentially leading to problems during a live breach.

Banks encourage IT managers to deliver projects on time and on budget and to maintain near-perfect levels of system availability. These objectives are obviously important, but overemphasizing them can mean that project managers do not do enough to minimize business-risk exposure. The prevailing culture encourages short-term delivery while underemphasizing long-tail but significant risks. Situations arise, for example, in which back-end systems are technically operational but the actual customer-facing business process is unavailable, because of a lost database connection, say, or a delay while the backup system kicks in. Infrequent but high-impact outages are almost never mentioned in performance-management systems, which instead feature operational data.

To monitor risk, best-practice banks add forward-looking metrics, such as the time it takes to detect and mitigate cyberincidents, the volume of unknown devices connected to the internal network, vendors out of compliance with security requirements, and employees failing phishing tests. Leading banks also track the number of incidents and the actual recovery times for highly critical service chains, including systems supporting mobile banking, ATM services, and electronic trading. Such a performance-management system should work hand in hand with a value-assurance framework, which establishes, for each major IT project, the criteria for aligning stakeholders and the software-development life cycle. Research has shown that a failure to manage these elements is the most common cause of budget and schedule overruns. Aligning business and IT managers with appropriate risk-management mind-sets and behavior is critical.
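As a concrete illustration of one such forward-looking metric, the sketch below computes mean time to detect and mean time to mitigate from an incident log. The data structure and field names are our own assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class CyberIncident:
    occurred: datetime   # when the incident actually began
    detected: datetime   # when the bank first detected it
    mitigated: datetime  # when it was contained or resolved

def mean_time_to_detect(incidents) -> timedelta:
    return timedelta(seconds=mean(
        (i.detected - i.occurred).total_seconds() for i in incidents))

def mean_time_to_mitigate(incidents) -> timedelta:
    return timedelta(seconds=mean(
        (i.mitigated - i.detected).total_seconds() for i in incidents))

# Hypothetical incident log
log = [
    CyberIncident(datetime(2017, 3, 1, 9), datetime(2017, 3, 1, 14),
                  datetime(2017, 3, 2, 10)),
    CyberIncident(datetime(2017, 5, 7, 2), datetime(2017, 5, 7, 3),
                  datetime(2017, 5, 7, 8)),
]
print(mean_time_to_detect(log), mean_time_to_mitigate(log))
```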

Technology-risk management requires critical thinking and hands-on experience in technology, business, and risk. Individuals with all of these skills are hard to find and command high salaries—but they are indispensable. Only someone skilled in all of these areas can both effectively challenge IT teams and act as a thought partner to guide strategic decisions.

The good news for banks is that they can develop this kind of talent through part-time staffing models, training, and rotational programs. Some banks have succeeded by recruiting experienced IT specialists willing to learn risk-management skills and giving them appropriate training and a ladder for advancement. Banks can thus build a core group of IT-risk professionals with a strong knowledge of functions, technology subdisciplines, and operational-risk practices. These are essential skills for the core work of the group—exercising proper oversight from the second line of defense. They will also help the technology-risk team with other parts of the job. IT-risk managers should define architectural standards, sit on architectural-review committees, establish a consistent software-development life cycle across the enterprise, and monitor test results. They should ensure not only that individual IT changes are delivered efficiently but also that the IT environment is sustainable in the long run.

The IT-risk group must be aware of what is happening in all parts of the organization. As a bulwark of the second line of defense, it must have strong insights into the first line (both the businesses and the IT units that support them), have a strong connection to the central IT team, forge connections among the various subdisciplinary teams, and integrate its work with the core risk-management team driving ERM.

To accomplish this delicate two-step of independence and partnership, banks can consider two actions. First, they can establish a single unified mission for the IT-risk group, which should enable the core business and be a partner to other functions to improve the overall effectiveness of technology-risk management. The function’s activities in managing technology risks should focus on this vision, shared by the board and top management. The function’s mission is then to understand the specific risks facing the bank given its core operational processes and organizational structure, to identify the major challenges in remediating or managing these risks, and to allocate responsibility for the specific actions needed.

Second, banks should create effective interaction and communication models that reduce ambiguity and promote collaboration. Clear committee structures, meeting cadences, and reporting lines will help avoid duplication and ensure that key functions are not left undone. In identifying and prioritizing risk, organizations can usually build on existing risk evaluations and analyses and add mechanisms to ensure collaboration.

The expectations of customers, shareholders, and regulators for the resilience of banks will continue to escalate. Recent events have exposed the ghost in the machine—how the failure of technology can cause lasting damage to an institution’s brand and reputation. Successful banks will establish an IT-risk group as a second line of defense that engages with the business and IT function while providing effective oversight and challenge. The group will also be staffed with experts in technology and risk management. With the right practices and capabilities, banks can effectively manage technology risk for the digital age.

Combined, digital innovation and operations-management discipline can raise organizations' performance further, faster, and at greater scale than was previously possible.

In every industry, customers’ digital expectations are rising, both directly for digital products and services and indirectly for the speed, accuracy, productivity, and convenience that digital makes possible. But the promise of digital raises new questions for the role of operations management—questions that are particularly important given the significant time, resources, and leadership attention that organizations have already devoted to improving how they manage their operations.

At the extremes, it can sound as if digitization is such a break from prior experience that little of this history will help. Some executives have asked us point blank: “If so much of what we do today is going to be automated—if straight-through processing takes over our operations, for example—what will be left to manage?” The answer, we believe, is “quite a lot.”

Digital capabilities are indeed quite new. But even as organizations balance lower investment in traditional operations against greater investment in digital, the need for operations management will hardly disappear. In fact, we believe the need will be more profound than ever, but for a type of operations management that offers not only stability—which 20th-century management culture provided in spades—but also the agility and responsiveness that digital demands.

The reasons we believe this are simple. First, at least for the next few years, to fully exploit digital capabilities most organizations will continue to depend on people. Early data suggest that human skills are actually becoming more critical in the digital world, not less. As tasks are automated, they tend to become commoditized; a “cutting edge” technology such as smartphone submission of insurance claims quickly becomes almost ubiquitous. In many contexts, therefore, competitive advantage is likely to depend even more on human capacity: on providing thoughtful advice to an investor saving for retirement or calm guidance to an insurance customer after an accident.

That leads us to our second reason for focusing on this type of operations management: building people’s capabilities. Once limited to repetitive tasks, machines are increasingly capable of complex activities, such as allocating work or even developing algorithms for mathematical modeling. As technologies such as machine learning provide ever more personalization, the role of the human will change, requiring new skills. A claims adjuster may start by using software to supplement her judgments, then help add new features to the software, and eventually may find ways to make that software more predictive and easier to use.

Acquiring new talents such as these is hard enough at the individual level. Multiplied across an organization it becomes exponentially more difficult, requiring constant cycles of experimentation, testing, and learning anew—a commitment that only the most resilient operations-management systems can support.

And if digital needs operations management, we believe it’s equally true that operations management needs digital. Digital advances are already making the management of operations more effective. Continually updated dashboards let leaders adjust people’s workloads instantly, while automated data analysis frees managers to spend more time with their teams.

The biggest breakthroughs, however, come from the biggest commitment: to embrace digital innovation and operations-management discipline at the same time. That’s how a few early leaders are becoming better performers faster than they ever thought possible. At a large North American property-and-casualty insurer, for example, a revamped digital channel has reduced call-center demand by 30 percent in less than a year, while improved management of the call-center teams has reduced workloads an additional 25 percent.
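If the digital channel's 30 percent reduction in demand and the further 25 percent reduction in team workloads compound, as the phrasing suggests, the overall effect is roughly half the original workload:

\[
(1 - 0.30) \times (1 - 0.25) = 0.525.
\]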

Digitization can be dangerous if it eliminates opportunities for productive human (or “analog”) intervention. The goal instead should be to find out where digital and analog can each contribute most.

That was the challenge for a B2B data-services provider, whose customized reports were an essential part of its white-glove business model. Rather than simply abandon digitization, however, the company enlisted both customers and frontline employees to determine which reports could be turned into automated products that customers could generate at will.

Working quickly via agile “sprints,” developers tested products with the front line, which was charged with teaching customers how to use the automated versions and gathering feedback on how they worked. The ongoing dialogue among customers, frontline employees, and the developer team now means the company can quickly develop and test almost any automated report, and successfully roll it out in record time.
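A minimal sketch of the self-serve pattern this describes, assuming a simple report catalog and a feedback log; every identifier here is hypothetical, invented for illustration:

```python
from datetime import date

# Hypothetical catalog of reports judged safe to automate. In a real
# system each builder would query the provider's data warehouse.
AUTOMATED_REPORTS = {
    "monthly_usage": lambda client: f"Usage summary for {client}, {date.today():%B %Y}",
    "churn_risk": lambda client: f"Churn-risk scores for {client}'s accounts",
}

feedback_log = []  # reviewed by the developer team each sprint

def generate_report(name, client):
    """Let a customer generate an automated report on demand, falling
    back to the white-glove (manual) path for anything not yet automated."""
    builder = AUTOMATED_REPORTS.get(name)
    if builder is None:
        return f"'{name}' is not automated yet; routing to an analyst."
    return builder(client)

def record_feedback(name, client, comment):
    """Frontline staff capture how the automated version worked."""
    feedback_log.append({"report": name, "client": client, "comment": comment})

print(generate_report("monthly_usage", "Acme Corp"))
record_feedback("monthly_usage", "Acme Corp", "Wants CSV export as well")
```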

Developing new digital products is only the beginning, as a global bank found when it launched an online portal. Most customers kept to their branch-banking habits—even for simple transactions and purchases that the portal could handle much more quickly and cheaply.

Building the portal wasn’t enough, nor was training branch associates to show customers how to use it. The whole bank needed to reorient its activities to showcase and sustain digital. That meant modifying roles for everyone from tellers to investment advisers, with new communications to anticipate people’s concerns during the transition and explain how customer service was evolving. New feedback mechanisms now ensure that developers hear when customers tell branch staff that the app doesn’t read their checks properly.

Within the first few months, use of the new portal increased 70 percent, while reductions in costly manual processing mean that bringing new customers on board is now 60 percent faster. And throughout the changes, employee engagement has actually improved.

The next shift redesigns internal roles so that they support the way customers work with the organization. That was the lesson a major European asset manager learned as it set out on a digital redesign of its complex, manual processes for accepting payments and for payouts on maturity. The entire organization consisted of small silos based on individual steps in each process, such as document review or payment processing—with no real connection to what customers wanted to accomplish. The resulting mismatch wasted time and effort for customers, associates, and managers alike.

The company saw that to digitize successfully, it would have to rethink its structure so that customers could easily move through each phase of fulfilling a basic need: for instance, “I’ve retired and want my annuity to start paying out.” The critical change was to assign a single person to redesign each “customer journey,” with responsibility not only for overseeing its digital elements but also for working hand in glove with operations managers to ensure the entire journey worked seamlessly. The resulting reconfiguration of the organization and operations-management systems reduced handoffs by more than 90 percent and cycle times by more than half, effectively doubling total capacity.
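The capacity claim is consistent with basic flow arithmetic (Little's law: throughput equals work in process divided by cycle time), so halving cycle time at constant work in process roughly doubles throughput. A quick illustration with made-up numbers:

```python
# Little's law: throughput = work in process / cycle time.
# Illustrative figures only; the article gives no absolute numbers.
wip = 400            # requests in the pipeline at any given time
cycle_before = 10.0  # days per request before the redesign
cycle_after = 5.0    # "cycle times ... by more than half"

throughput_before = wip / cycle_before  # 40 requests per day
throughput_after = wip / cycle_after    # 80 requests per day
print(f"Capacity multiplier: {throughput_after / throughput_before:.1f}x")  # 2.0x
```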

The final shift is the furthest reaching: digital’s speed requires leaders and managers to develop much stronger day-to-day skills in working with their teams. Too often, even substantial behavior changes don’t last. That’s when digital actually becomes part of the solution.

About two years after a top-to-bottom transformation, cracks began to show at a large North American property-and-casualty insurer. Competitors began to catch up as associate performance slipped. Managers and leaders reported high levels of stress and turnover.

A detailed assessment found that the new practices leaders had adopted—the cycle of daily huddles, problem-solving sessions, and check-ins to confirm processes were working—were losing their punch. Leaders were paying too little attention to the quality of these interactions, which were becoming ritualized. Their people responded by investing less as well.

Digital provided a way for leaders to recommit. An online portal now provides a central view of the leadership activities of managers at all levels. Master calendars let leaders prioritize their on-the-ground work with their teams over other interruptions. Redefined targets for each management tier are now measured on a daily basis. The resulting transparency has already increased engagement among managers, while raising retention rates for frontline associates.

