In Fortune's article it is claimed that when companies use AI to improve value creation, the benefits reach far beyond financial improvements. They can also be clearly seen in the culture: in the way we learn together, collaborate, and even understand each other. When AI is used, for example, to capture the current generation's knowledge and pass it on to the next, it frees time and energy and creates a learning environment where we don't have to rely on a few employees' memories.
Or when we improve processes with AI, we are also improving collaboration, as in this KLM case: “the airline KLM has started using A.I. to predict which checked-in passengers could miss their flights, sticking red tags on their luggage. That allowed baggage handlers to unload those bags quickly when necessary; pilots didn’t have to delay departures as much; and flight attendants didn’t have to pacify frustrated passengers. A.I. has helped align in real time KLM’s cross-functional teams on the tarmac.”
As AI can help companies question their “strategic assumptions, using the technology to find performance drivers that they could not identify through experience or intuition”, it means AI not only helps us do our current work more efficiently, but also helps us understand whether we are doing the right thing in the first place: “To build and work effectively with A.I., employees have to question their core business principles and processes, asking themselves: What are we trying to achieve, how can we get there, and why is it important”.
What is, in my opinion, the most interesting cultural aspect of AI, however, is the possibilities that tools such as word embeddings could offer in a cultural context. In this Berkeley study, Amir Goldberg (Stanford University) and Sameer B. Srivastava (University of California, Berkeley) suggest that word embedding models can, for example, help illuminate why some teams perform better than others and which employees are more likely to identify with their organisation, or help assess the extent to which organisational members’ perceptions are shared and along what dimensions.
What if, in the future, we could analyse the words people use when communicating with each other on a whole different level, and even help members of the working community tune their communication style so that it builds rather than breaks? Basically, it would help us with the how of communication. The article briefly explains how word embeddings would help us:
“With access to a sufficiently large set of training data, a word embedding algorithm gradually “learns” how group members communicate with one another. Word embedding models can reveal different facets of people’s perceptions without asking them directly what they think. Example: In one study, researchers used word embeddings to evaluate gender bias in language (Garg et al. 2018). They found that feminised professions such as “librarian” are closer in embedding space to the word “woman” than professions that are conventionally perceived as masculine, such as “carpenter.” The latter are much closer in space to the word “man.”
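To make that closeness check concrete, here is a toy sketch. The tiny three-dimensional vectors below are invented purely for illustration (real embedding models such as word2vec or GloVe learn vectors with hundreds of dimensions from large corpora); only the cosine-similarity comparison mirrors the kind of check Garg et al. describe.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy vectors standing in for trained word embeddings.
vectors = {
    "woman":     np.array([0.9, 0.1, 0.2]),
    "man":       np.array([0.1, 0.9, 0.2]),
    "librarian": np.array([0.8, 0.2, 0.3]),
    "carpenter": np.array([0.2, 0.8, 0.3]),
}

# For each profession, ask which gender word it sits closer to in the space.
for profession in ("librarian", "carpenter"):
    to_woman = cosine(vectors[profession], vectors["woman"])
    to_man = cosine(vectors[profession], vectors["man"])
    closer = "woman" if to_woman > to_man else "man"
    print(f"{profession}: closer to '{closer}' "
          f"(woman={to_woman:.2f}, man={to_man:.2f})")
```

With these toy values, “librarian” lands closer to “woman” and “carpenter” closer to “man”, which is exactly the pattern the quoted study found in real embeddings.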
So, they further explain: the many definitions of culture that analysts commonly use are unified in assuming that employees relate to beliefs and perceptions that are shared among a group of people, whether that group is a nation comprising hundreds of millions of people or a startup firm with only a few dozen employees.
So, what has been studied is the discursive diversity of team members, based on the language they use: to what extent are team members aligned or divergent in their thinking? Even if there are obviously quite a lot of questions about how to use these tools in the best way (one obvious factor being that I don't think employees would simply welcome bots and AI everywhere in their communications), there are a lot of possibilities to dig deeper. One is this thing I call a “How-Bot”. In the study it is described like this: “Imagine, for example, a conversational bot that occasionally asks, “Do you really want to send this message?” before one hits “send” on one’s email or instant message. “You may not have intended this, but your message might be interpreted as overly aggressive or hostile,” the bot might tell the user. Employed correctly, such bots may prove to be immensely useful in helping to foster psychologically safe and productive working environments.”
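As a rough illustration of how discursive diversity could be quantified, here is a minimal sketch: each member is represented by one vector (in practice, something like an average of the embeddings of the words they use), and the team's diversity is the average pairwise cosine distance between those vectors. The vectors and the `discursive_diversity` helper are my own invented illustration, not the study's actual method.

```python
from itertools import combinations
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity: 0 = identical direction, larger = more divergent."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def discursive_diversity(member_vectors):
    """Average pairwise cosine distance between members' language vectors."""
    pairs = list(combinations(member_vectors, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# Toy vectors standing in for each member's aggregated word usage.
aligned_team = [np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.85, 0.15])]
divergent_team = [np.array([0.9, 0.1]), np.array([0.1, 0.9]), np.array([0.5, 0.5])]

print(f"aligned team:   {discursive_diversity(aligned_team):.3f}")
print(f"divergent team: {discursive_diversity(divergent_team):.3f}")
```

The aligned team scores near zero because its members' vectors point in nearly the same direction; the divergent team scores much higher. Whether a low or high score is desirable would, of course, depend on the task, which is one of the open questions the study raises.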
To sum up: I think AI can have a substantial impact, not only on how we work, but on how our cultures evolve in organisations. The biggest effects will be on how we communicate (and I don't mean the tools, but the tone of voice), how we understand each other's work, and how able we are to share the same context. Of course, it has to be kept in mind that there is a fine line between monitoring and oppression with these tools.
But still, I think it is worth asking what possibilities this could bring to company cultures: increasing inclusiveness, reducing future conflicts, and helping us all learn, in a gentle way, from each other's thinking.