In this article, we have selected five computer science challenges that, we believe, will shape small, medium and large businesses in the years to come.
The most challenging problems faced by businesses have much in common: to survive the hype cycle, they require commercial appeal and proven potential.
For example, cloud computing alone is expected to reach a business volume of 47 billion dollars by 2017. And cloud/big data is just an enabling technology.
1) Autonomous Navigation
Cars, airplanes, drones, you name it. Autonomous navigation has been at the forefront of the planning community within artificial intelligence for more than forty years. The ability of an autonomous agent to move with perfect knowledge of its surroundings is improving at a rapid clip, almost to the point of optimal performance. But that progress is undermined when knowledge is imperfect.
For someone driving a car, deciding what to do when a person crosses the street is an everyday occurrence. From the technical point of view, however, the car cannot know beforehand when that event is going to happen, that is, when someone is going to decide to cross. The agent's knowledge is thus imperfect, so a response cannot be planned until the event is underway. New low-cost, high-throughput hardware is enabling real-time processing and enhancing long-running strategic plans with second-by-second changes.
Sensor technology is advancing rapidly, and having enough input data is becoming a non-issue. Visible-light and infrared cameras and distance sensors such as LIDAR are just a few of the technologies that were until recently unthinkable in low-cost devices.
Soon, having enough data will not be as big a problem as making sense of it.
Challenge: Allowing a computer to keep track of all possible outcomes as it decides what to do when someone crosses the street. Processing and making decisions over constant data streams is, and will remain, the challenge.
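One minimal way to sketch "deciding under imperfect knowledge from a data stream" is a Bayesian belief update: the car cannot know in advance whether a pedestrian will cross, so it maintains a probability that it revises with every noisy sensor frame and acts on a threshold. All numbers below (sensor reliabilities, prior, threshold) are invented for illustration.

```python
def update_belief(prior, reading, p_hit=0.9, p_false=0.2):
    """Bayesian update of P(pedestrian is crossing | sensor reading).

    p_hit:   probability the sensor fires when the pedestrian IS crossing
    p_false: probability the sensor fires when they are NOT
    """
    if reading:  # sensor reports "pedestrian moving toward the road"
        numerator = p_hit * prior
        evidence = p_hit * prior + p_false * (1 - prior)
    else:
        numerator = (1 - p_hit) * prior
        evidence = (1 - p_hit) * prior + (1 - p_false) * (1 - prior)
    return numerator / evidence

belief = 0.1  # prior: most pedestrians on the sidewalk stay there
for reading in [True, True, False, True]:  # a stream of sensor frames
    belief = update_belief(belief, reading)
    action = "brake" if belief > 0.8 else "continue"
    print(f"P(crossing)={belief:.2f} -> {action}")
```

A real planner tracks many such hypotheses at once, which is exactly why constant data streams are the hard part: every frame forces a re-evaluation of every outcome still in play.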
2) Behavioral Patterns
There is much literature on the successful identification of consumer patterns, but none is more widely known than the "beers and diapers on Friday" correlation, long a staple of the sales pipeline in the business intelligence field. Though it may be a myth, the case still merits description insofar as it demonstrates why finding correlations makes sense.
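The kind of analysis behind that story is association-rule mining: computing the support and confidence of rules like "diapers ⇒ beer" over transaction data. A minimal sketch, over a toy, made-up set of Friday baskets:

```python
# Toy transactions; each set is one shopping basket (invented data).
transactions = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"diapers", "milk"},
    {"beer", "chips"},
    {"diapers", "beer", "milk"},
]

def support(itemset):
    """Fraction of baskets that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the baskets with the antecedent, how many also have the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"diapers", "beer"}))       # -> 0.6
print(confidence({"diapers"}, {"beer"}))  # -> 0.75
```

In this toy data, 60% of all baskets contain both items, and 75% of diaper buyers also buy beer; at real scale the challenge is doing this over millions of baskets and thousands of items.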
Big data/cloud computing is more important than ever, but it is also becoming a commodity; huge storage capacity, cheaper chips, and special-purpose hardware like GPUs, FPGAs, and ASICs are crucial to this trend.
From the business perspective, C-level executives (CEOs, CIOs, CFOs) face a conundrum: having the infrastructure to handle the data is cheap; knowing what to do with it is not.
The general approach of companies not at the forefront of this field seems to be "Better to keep the data even if I don't know what the hell I can do with it."
Challenge: Understanding what our users do, why they do it, and how they interact with our services has always been, and will (for the foreseeable future) continue to be, the business driver behind growth.
Data can help us, but only if we know what to ask of it.
3) Natural Language Understanding (NLU)
The ability to understand the meaning of speech waveforms or of written text would, as an enabling technology, be a huge advance in several fields. NLU can be thought of as the decoding process that transforms the linguistic information in sound/speech/text into actions.
The challenges that must be overcome in the field of natural language understanding are many, from improving consistency in the knowledge-extraction process in big volumes of data to mimicking the ability of the human brain to make inferences with incomplete data (jump to conclusions).
Businesses will need to map their acquired knowledge to enable more natural queries over their datasets, better understanding of users’ goals and intentions, and further development of tools to represent semantic information. Data must be treated as a first-class citizen in the decision-making process.
Challenge: Designing algorithms robust enough to handle the different representations out there, to self-adjust, and to be able to “explain” to the user why a certain decision has been made.
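To make the "decoding language into actions" framing concrete, here is a deliberately naive intent matcher. The intents, keywords, and phrasings are all invented for illustration, and real systems use learned models rather than keyword overlap; the point is the shape of the problem, including returning the matched words so the system can "explain" its decision.

```python
# Hypothetical intents mapped to trigger keywords (invented example).
INTENTS = {
    "check_balance": {"balance", "account"},
    "transfer_money": {"transfer", "send", "pay"},
    "get_help": {"help", "support"},
}

def detect_intent(utterance):
    """Return the best-matching intent and the words that justified it."""
    words = set(utterance.lower().split())
    best, matched = "unknown", set()
    for intent, keywords in INTENTS.items():
        overlap = words & keywords
        if len(overlap) > len(matched):
            best, matched = intent, overlap
    return best, matched  # the matched words double as an "explanation"

intent, why = detect_intent("Please transfer money to my savings")
print(intent, why)  # -> transfer_money {'transfer'}
```

Even this toy version surfaces the hard parts: paraphrases ("move cash over") match nothing, and ambiguous utterances match several intents at once, which is where robustness and self-adjustment come in.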
4) Brain-Computer Interface (BCI)
Capturing and decoding electrical brain activity and using it to control external devices seems like a plot from a science fiction movie. The idea isn’t novel: ever since the 60s we have been trying (with some degree of success) to harness the data streams generated inside our brains.
It is not hard to see the appeal of this technology and its potential applications, from assistive and biomedical technology like artificial limbs and communication mechanisms for disabled people (key to their social integration) to more mundane tasks such as measuring straight from the brain how "good" a commercial or product is.
Challenges: There are two main challenges facing the BCI field: creating technology that is resilient, accurate, and non-invasive enough for widespread adoption; and devising algorithms that can sift through the data and discover interesting patterns to drive action.
Advances on both fronts are necessary if BCI is going to make a dent in the real world. However, once ready for adoption, BCI has the potential to radically change how we interact with computers and our surroundings.
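The signal-to-action half of that pipeline can be sketched very simply: compute the power of an EEG-like signal over sliding windows and trigger an action when it crosses a threshold. The signal, window size, and threshold below are all synthetic, invented for illustration; real BCI systems work with far noisier multi-channel data and learned classifiers.

```python
import math

def window_power(samples):
    """Mean squared amplitude of one window of the signal."""
    return sum(s * s for s in samples) / len(samples)

# Synthetic signal: low-amplitude background, then a burst of activity.
signal = [0.1 * math.sin(0.3 * i) for i in range(50)]
signal += [1.0 * math.sin(0.3 * i) for i in range(50)]

WINDOW, THRESHOLD = 10, 0.25  # invented tuning values
for start in range(0, len(signal) - WINDOW + 1, WINDOW):
    power = window_power(signal[start:start + WINDOW])
    if power > THRESHOLD:
        print(f"window at {start}: power={power:.2f} -> trigger action")
```

Thresholding on band power is roughly the level of processing the field started from; the open challenge is extracting richer, more reliable patterns from the same streams.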
5) Biotechnology and Bioinformatics
This is one of the most hyped, and least understood, areas of technological advancement. Biotech challenges are not only computational; they are also ethical: what do we do with what we can accomplish?
The business appeal of biotechnology is huge to any business in the “bioeconomy,” regardless of phase in the process: development, production, or application of biological products and processes.
The application of biotechnology to primary production, health and industry would prove a huge boost to the emerging “bioeconomy,” where biotechnology generates significant economic output. The “bioeconomy” in 2030 is likely to involve three elements: advanced knowledge of genes and complex cell processes, renewable biomass, and the integration of biotechnology applications across sectors.
The challenges facing bioinformatics are many: protein-structure prediction, protein-protein interactions, gene-expression analysis, and gene finding, to name just a few. All of them require sifting through huge datasets with very high levels of variability. To put it into perspective, with just five human genomes you can fill a one-terabyte drive. With the fastest sequencers, you can produce enough data in a month for half a year of processing.
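A back-of-the-envelope calculation shows where the "five genomes per terabyte" figure comes from. The inputs are rough, assumed values: a human genome is about 3.2 billion base pairs, typical whole-genome sequencing reads each position around 30 times, and raw output (base call plus quality score) costs roughly 2 bytes per sequenced base.

```python
# Assumed figures for a rough estimate (not precise values).
genome_bases = 3.2e9   # base pairs in one human genome
coverage = 30          # each position sequenced ~30 times
bytes_per_base = 2     # base call + quality score, FASTQ-style

bytes_per_genome = genome_bases * coverage * bytes_per_base
genomes_per_tb = 1e12 / bytes_per_genome
print(f"{bytes_per_genome / 1e9:.0f} GB per genome, "
      f"~{genomes_per_tb:.1f} genomes per TB")
# -> 192 GB per genome, ~5.2 genomes per TB
```

Under these assumptions one raw genome runs to about 190 GB, which is why a one-terabyte drive holds only about five of them.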
Challenge: The main challenge is efficiently handling this data variability, especially in light of the discovery that the most challenging diseases, like autism, Alzheimer's disease, and schizophrenia, are correlated with more complex genetic variation.
The common trait shared by all these problems is difficulty; there are not, at present, any clear cut solutions, and there may never be. These problems are so complex that we don’t know all the variables involved or how they interact. And, even if we knew all the variables, we would not necessarily have the technological capability to measure them.
There are also major business challenges. Computer scientists and engineers will ensure that we soon have more efficient computers with greater storage capacity and better algorithms. But unless we radically change how we approach human-computer interaction to cope with the intrinsic complexity of the problems described here, we are a long way from harnessing that power.
The good news is that many of these problems can potentially be tackled with machine learning and statistical methods like deep learning (the subject of an upcoming post).
With all the advances in data acquisition, the ultimate challenge is to start asking the right questions.