Article | March 5, 2020
Concurnas is a new general-purpose, open-source JVM programming language designed for building concurrent, distributed and parallel systems. Concurnas is easy to learn; it offers excellent performance as well as many features for building modern, enterprise-scale computer software. What distinguishes Concurnas from existing programming languages is that it presents a unique, simplified means of performing concurrent, distributed and parallel computation. These forms of computation are some of the most challenging in modern software engineering, but with Concurnas they are made easy.
Article | May 17, 2021
The common view is that AI software adoption is 'on its way', that it will soon replace many jobs (self-driving cars replacing drivers, for example), and that the majority of companies are already starting to embrace the efficiencies AI brings.
As a practitioner of AI software development, involved in many projects at my company AI Technologies, I have always found my direct experience in the field at odds with what the media generally portrays about AI adoption.
In this article I want to give my view on how AI projects affect work dynamics within clients' work processes, and compare that with the available studies on the impact of AI and new technologies on work. This should help readers, especially executives, set the right expectations and mindset when assessing a potential investment in a new AI project and deciding whether their company is ready for it.
To start with, any software development project, including AI, can be summarized in three stages: the proof of concept (POC), when the prototype is built; product development, when the software is actually engineered at scale; and live support/continuous improvement. It often happens that AI projects do not get past the POC stage, usually because
1) the right IT/data infrastructure is not in place, or
2) no specialists have been hired to handle the new software, or the digital transformation process has not yet been planned.
Regarding point 2, the most difficult issue is hiring data scientists or data/machine learning engineers, something many companies struggle with.
In fact, a March 2021 O’Reilly survey of enterprise AI adoption found that “the most significant barrier to AI adoption is the lack of skilled people and the difficulty of hiring.” And in 2019 it was estimated that there were around 144,000 AI-related job openings, but only around 26,000 developers and specialists seeking work.
Of course, hiring an in-house data scientist is not the only problem in restructuring the workforce. Often a corporation has to retrain entire teams to fully benefit from a new AI software solution.
I can give an example. As many readers know, a sales process involves three stages: lead generation, Q&A calls/emails with potential clients, and deal closing. A couple of years ago, AI Technologies was engaged to automate the Q&A stage, and we built an AI bot to manage the 'standard questions' a potential client may ask (without getting into the details, using AI, technically word2vec encoding, it is very possible to automate emails/chatbots for standardized questions like 'how much does it cost?' or 'how long is the warranty?'). With this new internal solution in place, the team responsible for the Q&A stage would have been retrained to increase either the number of leads or the number of closings. The company simply decided not to embark on the transformation process required to benefit from the new AI adoption.
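To make the 'standardized questions' idea concrete, here is a minimal sketch of embedding-based FAQ matching. Everything below (the toy three-dimensional word vectors, the canned answers, the 0.8 threshold) is my own illustrative assumption; a real bot would use vectors from a trained word2vec model and a far larger FAQ.

```python
import math

# Toy word vectors standing in for a trained word2vec model (made-up values).
WORD_VECS = {
    "price":     [0.9, 0.1, 0.0],
    "cost":      [0.85, 0.15, 0.05],
    "much":      [0.6, 0.2, 0.1],
    "warranty":  [0.05, 0.9, 0.1],
    "long":      [0.1, 0.7, 0.2],
    "guarantee": [0.1, 0.85, 0.15],
}

# Canned question -> canned answer (illustrative placeholders).
FAQ = {
    "how much does it cost": "Pricing starts at ...",
    "how long is the warranty": "The warranty runs for ...",
}

def embed(text):
    """Average the vectors of the known words; unknown words are skipped."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    if not vecs:
        return None
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def answer(question, threshold=0.8):
    """Return the canned answer closest to the question, or None to escalate to a human."""
    q = embed(question)
    if q is None:
        return None
    best, score = None, 0.0
    for canned, reply in FAQ.items():
        s = cosine(q, embed(canned))
        if s > score:
            best, score = reply, s
    return best if score >= threshold else None
```

For instance, `answer("what is the price")` matches the pricing entry even though the query shares no word with "how much does it cost", which is exactly what the embedding buys over keyword matching; anything below the threshold is handed off to a person.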
This example, in various forms, is actually quite common: unless they are really innovative, companies prefer to continue with their established internal procedures until some competitor threatens their profitability.
This brings us to the fact that AI is not an off-the-shelf solution that can be plugged in with no effort. From the moment a POC is under development, it is good practice to plan a digital transformation process within the company.
It is also worth mentioning that, contrary to what many expected, it is unlikely that the workforce has to be dismissed or made redundant following AI adoption. Continuing the example above, what the AI bot actually does is take over the repetitive tasks (Q&A) so people can do more creative work: engaging more clients (lead generation) or convincing them to buy (deal closing). Of course, this means that some people have to be retrained, but it also means that with the same people you can generate and close more sales.
It is a misconception to think that AI solutions will make human work redundant; we just need to adapt to new jobs.
My example resembles a classic story about the adoption of ATMs. When ATMs were introduced in 1969, conventional wisdom expected the number of banking locations to shrink; instead, ATMs made branches cost-effective enough that banks could open many more of them. There were under 200,000 bank tellers in 1970, but over 400,000 a decade later.
The other common problem companies face when they want to embrace AI adoption (point 1) is their current infrastructure: databases, servers, and CRM systems have to be in place already. To put it simply, any AI system requires data to work with, so it naturally sits on top of the data infrastructure used in day-to-day business operations.
In the last two years, AI Technologies has been engaged by a large public organization (70,000 employees) to build a solution that automatically detects employees maliciously manipulating the organization's data. To build the AI software, we also had to design a system to stream data from each employee terminal into a central database for processing. This infrastructure was not present at the beginning of the project: before the need for malicious-behavior detection arose, the organization had never really seen the necessity of gathering such data. Simple login and logout times were all that was needed to monitor employees' activity (which company folders or files they accessed was not considered important).
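As a rough sketch of the per-terminal event capture such a pipeline adds, consider the record below. The field names, the example path, and the in-process queue are my own illustrative assumptions, not the organization's actual schema; the real system streamed into a central database over the network.

```python
import json
import queue
import time

# Illustrative activity record: richer than the login/logout times the
# organization originally kept, so file-level behaviour can be analyzed.
def make_event(user_id, action, resource):
    return {
        "user": user_id,
        "action": action,      # e.g. "open", "edit", "delete"
        "resource": resource,  # folder/file the terminal touched
        "ts": time.time(),     # timestamp, for sequencing events
    }

# Stand-in for the network transport: each terminal serializes events and
# pushes them onto a stream consumed by the central processing database.
stream = queue.Queue()

def emit(event):
    stream.put(json.dumps(event))

emit(make_event("u123", "open", "/finance/q3.xlsx"))
record = json.loads(stream.get())  # what the central consumer would see
```

The point is simply that each extra field has to be deliberately captured and shipped; none of it falls out of a login/logout log, which is why the infrastructure work had to precede the AI work.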
This is a common situation: most companies' infrastructure is simply not ready to be used directly with AI solutions, because it was designed with other objectives in mind.
For the sake of completeness, most companies decide to invest their internal resources in other areas of the business rather than in CRM systems or expensive data infrastructure. There is no blame in this choice; in the end, any business has to be profitable, and the return on an infrastructure investment is not always easy to quantify.
If anything, this article should have given an idea of the major pitfalls in approaching AI projects, which can be summarized as follows:
• AI solutions are not off-the-shelf, ready-made software that can be put to use immediately: they often require new skilled hires within the client organization and potentially a plan for how to redeploy part of the workforce.
• It is a myth that AI solutions will necessarily replace employees, although it is possible that they will have to be retrained.
• Any AI project works on data, and infrastructure is necessary to benefit from the new solutions. Before embarking on an AI project, an organization has to budget for either a new infrastructure or, at the very least, an upgrade of the one in use.
In essence, given the implications for both employees and infrastructure, AI adoption should be considered a digital transformation process more than a software development project. After the overwhelming hype of recent years, I would expect that in the next 2-3 years more companies will start to realize what AI projects really are and how best to use them.
Article | March 15, 2021
Cognitive walkthrough and heuristic evaluation are two methods used to assess systems’ usability based on cognitive engineering, which applies cognitive psychology to understand the human mind’s skills and limitations in order to develop user-friendly systems.
A cognitive walkthrough evaluates how easy a system is to learn by exploring its interface, breaking the user's tasks down into the actions necessary to perform them. Heuristic evaluation assesses the usability of user interfaces against a set of basic rules (heuristics).
Although these two methods are part of the UX discipline, we Tech Writers also use them. Not ipsis litteris (to the letter), obviously, but in the abstract, because of what underlies them: cognition and heuristics.
Cognition is the psychological function involved in the acquisition of knowledge through logical associations. Heuristics is the human capacity for discovery and invention, focused on solving problems through experience and creativity.
The documentation requests we receive do not always come with in-depth technical and functional specifications. It is not uncommon for us to venture into the system alone to learn and discover how, when, where and why certain functionality is being launched.
In our learning and discovery process, we put ourselves in the user's shoes and decompose the necessary actions to carry out their tasks. We test patterns, experiment with basic (and alternative) flows using our experience and creativity to produce complete and detailed documentation.
As we develop knowledge of the system we document, we mainly use the recognition heuristic, which is based on retrieving memories and recognizing alternatives. We start having déjà vu when we perceive similarities between functionalities we have already documented.
Even as our recognition process becomes more organic and fluid, we still smell a rat now and then: why does this form have an extra field? What happens if I fill this field with a different value?
Tech Writers need to be partly laymen, as there has to be a certain lack of recognition for further inferences to exist. That is why having no background in systems technology can, contrary to popular belief, be extraordinarily favorable for Tech Writers’ training.
Article | August 5, 2020
The quest for a magical cure-all is not new. In the days of the old west, a medicine wagon would roll into town, and a slick salesman would emerge, pitching his latest panacea, promising that his miracle elixir could treat anything from a hangnail to heart disease. The claims were alluring. But they were based on half-truths and exaggerations that exploited the desperation of settlers, some of whom were desperately ill, and others terrified at the thought of being unprepared for tragedy.