A Guide To No Code & Low Code Software Development

| January 16, 2020

Software development has traditionally been about how efficiently code is written and executed to produce a working application, and this is the process followed by almost every software development company. A newer trend, however, is building software applications with very little hand-written code, or with no code at all. No code and low code may sound similar, but there are real differences between them, and developers who are new to these approaches need to understand what each one is and what benefits adopting them can bring.
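To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what low-code platforms do: instead of hand-writing validation and persistence logic for every screen, a developer supplies a declarative specification and a generic engine interprets it. The FORM_SPEC structure, field names, and the validate and run_form functions are invented for this illustration and are not the API of any particular platform.

```python
# Hypothetical illustration of the low-code idea: the application is described
# declaratively, and a small generic engine does the work that would otherwise
# be hand-coded for every screen. All names below are invented for this sketch.

FORM_SPEC = {
    "title": "Contact request",
    "fields": [
        {"name": "email", "type": "email", "required": True},
        {"name": "message", "type": "text", "required": True, "max_length": 500},
    ],
}

def validate(spec: dict, submission: dict) -> list[str]:
    """Check a submission against the declarative field spec."""
    errors = []
    for field in spec["fields"]:
        value = submission.get(field["name"], "")
        if field.get("required") and not value:
            errors.append(f"{field['name']} is required")
        if field["type"] == "email" and value and "@" not in value:
            errors.append(f"{field['name']} must be a valid email address")
        if "max_length" in field and len(value) > field["max_length"]:
            errors.append(f"{field['name']} exceeds {field['max_length']} characters")
    return errors

def run_form(spec: dict, submission: dict) -> str:
    """Stand-in for the platform's generic engine: validate, then 'save'."""
    errors = validate(spec, submission)
    if errors:
        return "Rejected: " + "; ".join(errors)
    return f"Saved '{spec['title']}' submission for {submission['email']}"

if __name__ == "__main__":
    print(run_form(FORM_SPEC, {"email": "user@example.com", "message": "Hello"}))
    print(run_form(FORM_SPEC, {"email": "not-an-email", "message": ""}))
```

In a real low-code tool the specification would typically be produced through a visual builder rather than typed by hand, and the engine would also generate the user interface and storage layer; the point of the sketch is only that the developer's effort shifts from writing code to configuring it.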

Spotlight

Netitude Ltd

Netitude is the managed IT service provider of choice. With a client list that spans education to manufacturing, we are meticulous about finding the right solution for each individual client. We design a unique combination of software and hardware for each customer, providing reliable and resilient IT.

OTHER ARTICLES

Culture of Innovation and Collaboration: Hybrid Cloud, Privacy in AI and Data Caching

Article | August 14, 2020

Red Hat is continually innovating, and part of that innovation includes researching and striving to solve the problems our customers face. That innovation is driven through the Office of the CTO and spans OpenShift, OpenShift Container Storage, and use cases such as the hybrid cloud, privacy concerns in AI, and data caching. We recently interviewed Hugh Brock, research director for the Office of the CTO at Red Hat, about these very topics.

Read More

How Governments Have Used AI to Fight COVID-19

Article | March 29, 2020

Governments all around the globe are using artificial intelligence (AI) to help fight the ongoing COVID-19 pandemic. The technology is being applied to a variety of tasks, including speeding up the development of testing kits and treatments, giving citizens access to real-time data, and tracking the spread of the virus. South Korea's government, often cited as an example of how to combat the virus, pushed its private sector to start developing testing kits immediately after the first reports began to arrive out of China. One of those companies was Seoul-based molecular biotech firm Seegene, which used AI to speed up the development of its testing kits. The company was able to submit its solution to the Korea Centers for Disease Control and Prevention (KCDC) just three weeks after its scientists began their work. According to Chun Jong-Yoon, founder and chief executive of the company, the process would have taken at least two to three months without the use of AI.

Read More

IN AI (CAN) WE TRUST?

Article | April 20, 2021

Artificial intelligence (AI) is the best thing to happen to our lives. It helps us read our emails, complete our sentences, get directions, shop online, get dining and entertainment recommendations, and even makes it easier to connect with old friends or make new ones on social media. AI is not only learning to do many human jobs; it is also making decisions for us. The question is whether these decisions can be trusted. To elaborate: does AI-aided recruitment select or reject the right candidates? Is the Tinder match made in heaven or by the algorithm? Who is being sent to jail: criminals, or innocents flagged by a biased prediction? As humans, we come from a diverse range of sociopolitical, racial and cultural backgrounds. The idea of what is right, and the mere question of morality itself, changes depending on the context. How does the AI decide what is right, and for whom? Faced with the decision to save the driver of a smart car or the pedestrian, whom does the onboard AI choose, and how does it arrive at that decision?

A Question Of Ethics

Before AI can think for humans, humans have to think for AI. Essentially, the ethics of AI technology is the embodiment of its creators' ethics, and this is where the "ethical AI conundrum" begins. Whether we call AI good or evil, the underlying concern that dominates every invention or innovation is human bias. There is enough evidence pointing in this direction, the most recent and prominent case being Apple. In 2019, the company's new credit card was accused of offering some women a lower limit despite them having better credit scores than their male spouses. The bias was pronounced enough that Apple co-founder Steve Wozniak noted that his wife got a lower credit limit than he did, even though they had "no separate bank or credit card accounts or any separate assets."

AI is open to bias because it makes decisions based on information supplied by its human creators, and that information contains biases. Many of those creators are males who grew up in the western world, which can predispose them toward particular communities and geographies. There has been plenty of debate around COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by courts in the United States to predict the likelihood of reoffending. The algorithm produced twice as many false positives for black offenders (45%) as for white offenders (23%).

Garbage In, Garbage Out

TechTarget defines the concept of "garbage in, garbage out" this way: "The quality of the input determines the quality of output." Beyond its human creators, bias can also permeate a machine's intelligence through its data. After all, as B Nalini noted, it is humans who frame the problem, train the model and deploy the system. And even with unbiased data there is no guarantee of accuracy, as the very process by which machine learning models learn can yield biased outcomes.
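To make the disparity discussed above concrete, here is a minimal sketch, not the COMPAS methodology itself, of how a gap in false positive rates between groups can be measured from a model's predictions. The tiny dataset and the false_positive_rate helper are invented for illustration only.

```python
# Minimal, illustrative sketch of auditing a classifier for group-level bias by
# comparing false positive rates, the metric at the heart of the COMPAS debate.
# The records below are made up; they are not COMPAS data.

from collections import defaultdict

# Each record: (group, actually_reoffended, predicted_high_risk)
records = [
    ("A", False, True), ("A", False, True), ("A", False, False), ("A", True, True),
    ("B", False, True), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = [r for r in rows if not r[1]]   # truly non-reoffending people
    if not negatives:
        return 0.0
    flagged = [r for r in negatives if r[2]]    # wrongly flagged as high risk
    return len(flagged) / len(negatives)

by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

for group, rows in sorted(by_group.items()):
    print(f"Group {group}: false positive rate = {false_positive_rate(rows):.0%}")

# A large gap between groups (for example, the reported 45% vs. 23% in the
# COMPAS analysis) signals that the model's errors fall unevenly on one group,
# even if its overall accuracy looks acceptable.
```

An audit like this only surfaces the disparity; deciding what counts as "fair" (equal false positive rates, equal precision, or something else) is exactly the definitional problem the article goes on to describe.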
Teaching AI Morality

In a 2001 article, futurist and inventor Raymond Kurzweil observed that our intuitive view of progress is linear, while the rate of change itself increases exponentially, so we may expect to see the equivalent of 20,000 years of progress over the course of the 21st century. Yet even as we acknowledge that exponential growth, we must also accept that AI is a relatively new technology. The term itself came into existence a mere 60 years ago, meaning we are closer to the beginning, or perhaps the middle, than to the end. AI is still a toddler, learning the difference between moral right and wrong and inheriting its creators' biases. It still struggles to do much more than detect statistical patterns in large datasets. Human understanding and intelligence extend far beyond static ideas of right and wrong, with the rules themselves changing according to sociocultural and historical context. If, as humans, we are still struggling with morality, it is rather presumptuous of us to expect a machine that we have created to outshine us in this regard.

As the Harvard Business Review noted, there are two conclusions. The first involves acknowledging how AI can help improve human decision-making itself by predicting outcomes from available data while disregarding the variables that lead human decision-makers to generalize and segregate without even realizing their inherent biases. The second points to a more complicated need: to technically define and measure the ever-fleeting idea of "fairness."

Conclusion

Bias is as fundamental as the air we breathe or the environment we live in, and it is prevalent among us all, as individuals and as communities. At this point in human history, the world is getting ready to industrialize AI and deploy it more widely, which makes addressing these "inherent" AI biases exceptionally critical right now. Just as a pet blindly mirrors its trainer's instructions and personality, AI mirrors its creators' input, biased or not. The root of the problem therefore goes far deeper than AI ethics; it becomes a question of human morality and of the concept of "fairness" itself, how it can be defined and how it can be measured.

"Debiasing humans is harder than debiasing AI systems," believes Olga Russakovsky, an assistant professor in the Department of Computer Science at Princeton University and co-founder of the AI4ALL Foundation, which works to increase diversity and inclusion within AI. "I am optimistic that automated decision making will become fairer," she said in an interview with Wired.

First printed in Forbes on Feb 9, 2021.

Read More

5 WAYS SNOWFLAKE CAN HELP LIFE SCIENCES BECOME DATA-DRIVEN

Article | August 11, 2020

The life sciences industry is at a turning point. According to Deloitte, “to prepare for the future and remain relevant in the ever-evolving business landscape, biopharma and medtech organizations will be looking for new ways to create value and new metrics to make sense of today’s wealth of data.” For life sciences companies using outdated legacy on-premises and cloud database systems, however, the exploding volume and variety of data pose significant management and security challenges.

Read More

