Article | August 13, 2020
The coronavirus outbreak that began in China has grown into a pandemic and is affecting global health as well as social and economic dynamics. Succeeding in the face of such unprecedented shifts in market, health and social paradigms demands an ever-increasing velocity and scale of analysis, in terms of both processing and access. The COVID-19 pandemic is also accompanied by an infodemic. With the novel coronavirus filling headlines, TV news and social media, it can seem as if we are drowning in information and data about the virus. With so much data being pushed at us and shared, it can be hard for the general public to know what is correct, what is useful and, unfortunately, what is dangerous. In general, levels of trust in scientists are quite high, albeit with differences across countries and regions. A 2019 survey conducted across 140 countries showed that, globally, 72% of respondents trusted scientists at “high” or “medium” levels. However, that proportion ranged from about 90% in Northern and Western Europe to 68% in South America and 48% in Central Africa (Rabesandratana, 2020).
In times of crisis, like the ongoing spread of COVID-19, both scientific and non-scientific data should be a trusted source of information, analysis and decision making. While global sharing of and collaboration on research data has reached unprecedented levels, challenges remain. Trust in at least some of the data is relatively low, and outstanding issues include the lack of specific standards, coordination and interoperability, as well as data quality and interpretation. To strengthen the contribution of open science to the COVID-19 response, policy makers need to ensure adequate data governance models, interoperable standards, sustainable data sharing agreements involving the public sector, private sector and civil society, incentives for researchers, sustainable infrastructures, human and institutional capabilities, and mechanisms for access to data across borders.
COVID-19 data is cited as critical for vaccine discovery, for planning and forecasting healthcare capacity, and for setting up emergency systems, and it is expected to contribute to policy objectives such as higher transparency and accountability, more informed policy debates, better public services, greater citizen engagement, and new business development. This is precisely why “open data” access to COVID-19 information is critical for humanity to succeed. In global emergencies like the coronavirus (COVID-19) pandemic, open science policies can remove obstacles to the free flow of research data and ideas, and thus accelerate the pace of research critical to combating the disease. UNESCO, which has opened access to a number of datasets, is playing a leading role in this direction. Thankfully, scientists around the world working on COVID-19 are able to work together, share data and findings, and hopefully make a difference to the containment and treatment of COVID-19 and, eventually, to vaccines.
Science and technology are essential to humanity’s collective response to the COVID-19 pandemic. Yet the extent to which policymaking is shaped by scientific evidence and by technological possibilities varies across governments and societies, and can often be limited. At the same time, collaborations across science and technology communities have grown in response to the current crisis, holding promise for enhanced cooperation in the future as well.
A prominent example of this is the Coalition for Epidemic Preparedness Innovations (CEPI), launched in 2017 as a partnership between public, private, philanthropic and civil society organizations to accelerate the development of epidemic vaccines. Its ongoing work has cut the expected development time for a COVID-19 vaccine to 12–18 months, and its grants are providing quick funding for some promising early candidates. It is estimated that an investment of USD 2 billion will be needed, with resources being made available from a variety of sources (Yamey et al., 2020).
The Open COVID Pledge was launched in April 2020 by an international coalition of scientists, lawyers, and technology companies, and calls on authors to make all intellectual property (IP) under their control available, free of charge, and without encumbrances to help end the COVID-19 pandemic, and reduce the impact of the disease. Some notable signatories include Intel, Facebook, Amazon, IBM, Sandia National Laboratories, Hewlett Packard, Microsoft, Uber, Open Knowledge Foundation, the Massachusetts Institute of Technology, and AT&T. The signatories will offer a specific non-exclusive royalty-free Open COVID license to use IP for the purpose of diagnosing, preventing and treating COVID-19.
Also illustrating the power of open science, online platforms are increasingly facilitating collaborative work of COVID-19 researchers around the world. A few examples include:
1. Research on treatments and vaccines is supported by Elixir, REACTing, CEPI and others.
2. WHO-funded research and data-organization initiatives.
3. The London School of Hygiene and Tropical Medicine released a dataset on the environments that have led to significant clusters of COVID-19 cases, containing more than 250 records with date, location, whether the event was indoors or outdoors, and how many individuals became infected. (7/24/20)
4. The European Union Science Hub published a report on the concept of data-driven Mobility Functional Areas (MFAs), demonstrating how mobile data calculated at a European regional scale can be useful for informing policies related to COVID-19 and future outbreaks. (7/16/20)
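Datasets like the LSHTM cluster release lend themselves to simple aggregate questions. The sketch below is purely illustrative: the field names and records are invented for the example and are not taken from the actual dataset.

```python
# Invented records mimicking the shape of a cluster dataset: date,
# setting, indoor/outdoor flag, and the number of infected individuals.
clusters = [
    {"date": "2020-03-10", "setting": "choir practice", "indoor": True,  "cases": 52},
    {"date": "2020-04-02", "setting": "meat plant",     "indoor": True,  "cases": 120},
    {"date": "2020-05-18", "setting": "outdoor market", "indoor": False, "cases": 9},
]

# A typical question such a dataset can answer: how many recorded cases
# came from indoor versus outdoor events?
indoor_cases = sum(c["cases"] for c in clusters if c["indoor"])
outdoor_cases = sum(c["cases"] for c in clusters if not c["indoor"])
```

Even this toy aggregation shows why consistent fields (date, location, setting, case count) matter: without them, pooling records across sources becomes guesswork.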
While clinical, epidemiological and laboratory data about COVID-19 is widely available, including genomic sequencing of the pathogen, a number of challenges remain:
1. Not all data is sufficiently findable, accessible, interoperable and reusable (FAIR); much of it is not yet FAIR data.
2. Sources of data tend to be dispersed; even though many pooling initiatives are under way, curation needs to happen “on the fly”.
3. In addition, many issues arise around the interpretation of data. This can be illustrated by the widely followed epidemiological statistics, which typically concern “confirmed cases”, “deaths” and “recoveries”. Each of these items seems to be treated differently in different countries, and is sometimes subject to methodological changes within the same country.
4. Specific standards for COVID-19 data therefore need to be established; this is one of the priorities of the UK COVID-19 Strategy, and a working group within the Research Data Alliance has been set up to propose such standards at an international level.
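The interoperability problem in point 3 can be made concrete with a small sketch: two sources report the same epidemiological quantities under different field names, so each needs a mapping to a shared schema before the data can be pooled. All field names and numbers here are invented for illustration.

```python
# Two hypothetical country feeds reporting the same quantities
# under different local field names.
source_a = {"confirmed": 1200, "deceased": 35, "recovered": 900}
source_b = {"cases_total": 800, "deaths": 12, "discharged": 640}

# Per-source mapping from local field names to a common schema.
schema_map = {
    "a": {"confirmed": "confirmed", "deceased": "deaths", "recovered": "recoveries"},
    "b": {"cases_total": "confirmed", "deaths": "deaths", "discharged": "recoveries"},
}

def harmonize(record, mapping):
    """Rename a record's fields according to the common schema."""
    return {mapping[key]: value for key, value in record.items()}

pooled = [harmonize(source_a, schema_map["a"]),
          harmonize(source_b, schema_map["b"])]
total_confirmed = sum(r["confirmed"] for r in pooled)
```

Note that renaming fields is the easy part; the harder problem flagged above — that “confirmed” itself can be defined differently across countries — cannot be fixed by a mapping table and is exactly what shared standards are for.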
Given the achievements and challenges of open science in the current crisis, lessons from prior experience, including the global SARS and MERS outbreaks, can be drawn on to inform the design of open science initiatives addressing the COVID-19 crisis. The following actions can help to further strengthen open science in support of responses to the COVID-19 crisis:
1. Providing regulatory frameworks that would enable interoperability within the networks of large electronic health records providers, patient-mediated exchanges, and peer-to-peer direct exchanges. Data standards need to ensure that data is findable, accessible, interoperable and reusable, including general data standards as well as specific standards for the pandemic.
2. Public actors, private actors and civil society working together to develop and/or clarify a governance framework for the trusted reuse of privately-held research data in the public interest. This framework should include governance principles, open data policies, trusted data reuse agreements, transparency requirements and safeguards, and accountability mechanisms, including ethical councils, that clearly define duties of care for data accessed in emergency contexts.
3. Securing adequate infrastructure (including data and software repositories, computational infrastructure, and digital collaboration platforms) to allow for recurrent occurrences of emergency situations. This includes a global network of certified trustworthy and interlinked repositories with compatible standards to guarantee the long-term preservation of FAIR COVID-19 data, as well as the preparedness for any future emergencies.
4. Ensuring that adequate human capital and institutional capabilities are in place to manage, create, curate and reuse research data – both in individual institutions and in institutions that act as data aggregators, whose role is real-time curation of data from different sources.
In increasingly knowledge-based societies and economies, data are a key resource. Enhanced access to publicly funded data enables research and innovation, and has far-reaching effects on resource efficiency, productivity and competitiveness, creating benefits for society at large. Yet these benefits must also be balanced against associated risks to privacy, intellectual property, national security and the public interest.
Entities such as UNESCO are helping the open science movement to progress towards establishing norms and standards that will facilitate greater, and more timely, access to scientific research across the world. Independent scientific assessments that inform the work of many United Nations bodies are indicating areas needing urgent action, and international cooperation can help with national capacities to implement them. At the same time, actively engaging with different stakeholders in countries around the dissemination of the findings of such assessments can help in building public trust in science.
Article | October 21, 2020
Consciousness—it’s one of the biggest questions out there.
One thing that people today have in common with those from the earliest ages is questioning consciousness and our own existence. It’s taken different forms through the years, but the questions are largely similar at their core and the answers are still at large after all this time.
The good news is that while we don’t have the answers yet, or even a timetable for when we might get those answers, we know more now than we have at any other point in human history.
It’s easier to share information than it ever was in the past, and in this era, even an average person can study the big questions about life without the requirement of a formal education or access to a university.
But in this age where it’s easier to ask questions, what kind of answers are we actually being led towards?
We could be in a simulation—but not in the way you think
One of the more modern theories on consciousness proposes that we might be living in a simulation. And modern really is the right word to describe this one, because it would have been an unthinkable idea even 20 years ago.
However, as computing has grown stronger and stronger over the years, a key question was raised by these advancements: Is it possible that somewhere, computers are already powerful enough to run an entire universe? And if that’s the case, are we living in one of these simulations?
While this sounds outlandish, it’s certainly a theory that has at least some support. That includes support from Elon Musk, who says we probably are living in a simulation.
Don’t think, however, that we’re living in some version of the Sims catered to an alien audience. Games might be the first thing that comes to mind for us when simulations are brought up, but a more serious answer is quite a bit different from that idea.
Rather than a game, such a simulation may be for, to put it simply, historical purposes. That is to say, instead of some advanced alien civilization running their own simulated universe, it may be advanced humans from the future simulating the lives of their ancestors.
But this simulation of the past would be real enough that for the simulated person on the other side, everything feels real and there’s no way to tell that it is a simulation.
This isn’t just an idea from science fiction, as much as it might sound like one. It was proposed by Nick Bostrom, an Oxford professor. There are a lot of possible reasons why a future society might want to run a simulation in this way, ranging from studying history to preserving the records of the past.
If you don’t think this would be possible from a technical perspective, just consider the jump in quality between early computers and the computers of today. Computing has already improved exponentially within our lifetime. In the very far future, this growth may have continued to heights that would have been unimaginable previously, just like computers today would have been unimaginable to someone used to the first computers.
Quantum mechanics could be part of the explanation
We don’t know much about how the brain works. While there’s been a lot of scientific progress since the questions around the nature of consciousness were first raised, there’s still a long way to go in figuring out just what makes the brain tick, so to speak.
Quantum mechanics, however, is good at describing things that don’t follow the everyday laws of physics. It’s hard to explain exactly how quantum mechanics works, too, but we do know a bit more about it than we know about the brain.
Essentially, if you break things down to a small enough level, they begin to respond differently. Some of the laws and theories that would have dictated their behavior previously begin to behave more loosely.
Take a toothpick for example. You can move it around or drop it or throw it and it follows the same laws of physics. And if you snapped it in half, those halves would also follow the same rules. However, if you kept doing this until you reached a certain tiny, microscopic level, things would get weird.
But just saying that the brain might work on quantum mechanics doesn’t actually explain much. After all, that statement says nothing about what these mechanics may actually do, and more importantly, what that means for us.
Fortunately, though, more detailed theories on the subject do exist. Some suggest that quantum laws at work in our brains could give rise to consciousness, possibly drawing on a “fourth dimension” around us.
Quantum laws may also dictate that particles behave differently depending on whether they’re being observed. Of course, the definitions are complex: “observation” here is a technical term that doesn’t literally mean looking at something in the everyday sense of the word.
But at least in theory, it’s possible that much of how we experience the world has to do with our brain observing and interacting with particles around us at the quantum level. These observations may be a basic building block behind everything—a source code, so to speak, for the universe at large.
Universal consciousness remains a theory
Universal consciousness might be the oldest theory on this list. It predates the more modern ideas mentioned with quantum mechanics and the simulation theory, but there’s enough anecdotal evidence surrounding the subject to at least consider it.
Nor is it as complicated as quantum mechanics; it’s probably the easiest theory of the three to understand. It’s the idea that we all essentially come from the same place, or that consciousness itself is an extension of the universe.
This belief has been seen in religions from differing times and places, with Buddhism notably claiming that consciousness is around us everywhere. It’s not just Buddhism that has reflected these ideas, however.
There are many anecdotal stories over the years of people who have been close to death, or have medically died, and believed that during these experiences they became one with the universe or something else along those lines.
Of course, these stories don’t hold much weight with the scientific community, and it’s obviously hard to study this kind of phenomenon in a meaningful way.
But considering a subject like consciousness, something we don’t understand, entirely through the same scientific methods used for everything else may be a mistake.
After all, the concept of the universal mind has been around since at least 480 B.C., when it was introduced by Anaxagoras, a philosopher from before the time of Socrates. While the passage of that much time doesn’t necessarily make the theory true, a lot of people have put their belief behind it between then and now.
Optimism about the future
Earlier in this article, we mentioned Elon Musk’s belief that humanity is living in a simulation. That’s not the only time Musk has spoken about ideas that many people would consider outlandish.
He’s spoken of other things that might as well sound like something out of a science fiction novel, such as the threat of artificial intelligence.
When Musk did speak about AI, however, he had a notable quote that didn’t have to do directly with that specific subject matter at all. Rather, it was a general outlook on philosophy and life.
“You kind of have to be optimistic about the future. There’s no point in being pessimistic,” Musk said. “I’d rather be optimistic and wrong than pessimistic and right.”
It’s philosophical advice worth keeping in mind.
The fact of the matter is, we don’t have the answers. There are various places to draw answers from, whether conventional theories, the newer ones about simulations and quantum physics, or religions that have been around for hundreds or thousands of years.
Whatever you do believe about the mind, or even if you don’t believe anything at all and you’re just waiting to see what answers scientists come up with in the future, keep your head up.
When the answers aren’t in yet and all of them could be wrong, you can only keep a positive outlook and hold on to the hope that your preferred theory is one with truth behind it.
Article | February 26, 2020
When talking about advances in artificial intelligence, we hear a lot about adversarial attacks, specifically those that attempt to “deceive” an AI into believing, or to be more accurate, classifying, something incorrectly. For example, autonomous vehicles can be fooled into “thinking” stop signs are speed limit signs, pandas can be misclassified as gibbons, and voice assistants can be hijacked by inaudible acoustic commands. Such examples shape the narrative around AI deception. In another form, AI can be used deceptively to manipulate a person’s perceptions and beliefs through “deepfake” video, audio and images. The major AI conferences held around the world are addressing the subject of AI deception more frequently too, and many debates and discussions are under way on how we can defend against such attacks through detection mechanisms.
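The panda-for-gibbon misclassification mentioned above is the canonical example of a gradient-based attack: a tiny, targeted perturbation is added to the input in the direction that most increases the model’s loss. Below is a toy sketch of that idea, in the style of the fast gradient sign method, applied to a simple logistic classifier. The weights and inputs are invented for illustration; this is not an attack on any real model.

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """Nudge input x in the direction that increases the classifier's loss."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b   # logit of the linear model
    p = 1.0 / (1.0 + math.exp(-z))                 # predicted probability of class 1
    grad = [(p - y) * wi for wi in w]              # d(loss)/d(x) for cross-entropy
    sign = lambda g: (g > 0) - (g < 0)
    # Step each coordinate by eps in the sign of the gradient.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy "model" and a correctly classified input (all numbers invented).
w = [1.0, -2.0, 0.5]
b = 0.0
x = [0.5, -0.5, 1.0]

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# x_adv differs from x by at most eps in each coordinate, yet the
# model's confidence that it belongs to class 1 drops noticeably.
```

The unsettling property that makes such attacks newsworthy is visible even here: the perturbation is bounded and can be imperceptibly small in high-dimensional inputs like images, while still degrading or flipping the model’s decision.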
Article | March 26, 2020
Search and AI-driven analytics provider ThoughtSpot recently announced a collaboration with Google Cloud to launch Embrace for Google Cloud Platform, which will enable enterprises to perform search and AI-driven analytics directly in Google BigQuery. The launch of Embrace will help enterprises leverage the dual power of Google BigQuery and ThoughtSpot’s augmented analytics. The combined offering will enable organizations to derive better insights and take appropriate actions. “Enterprises have more data at their disposal than ever before. The problem arises, however, when they look to turn that data into insights that can transform how their business operates. The old analytics stack is too slow and cumbersome to deliver the value they need from their data,” said Seann Gardiner, SVP of Business Development & GM of Embrace, ThoughtSpot. “Embrace for Google Cloud exemplifies the new, cloud-native, AI-powered analytics stack required to rewrite this equation and drive true transformation for our customers.”