Disclaimer: We make no claims of ownership on this content. It’s just a cool story about an exaptation that we found on YouTube and thought you might like.
co-authored by Dr. Alicia Knoedler and Dave King
Dr. Alicia Knoedler: For the past 18 years, I have sought opportunities and means to advocate for researchers working to develop and accelerate their research programs. I had the very fortunate opportunity to meet Dave King in 2014, when he relocated his company to Oklahoma City. At that time I was the Associate Vice President of Research, leading a team called CRPDE. One of this team's core missions was to help researchers form teams and to seek resources and funding for those teams. Forming teams that work well together and pursue innovative work is challenging, but demonstrating why a newly assembled team will be innovative is equally challenging, if not more so.
When I found out that Dave had designed his company, Exaptive, to apply data analytics and data visualization tools to matters of team dynamics and innovation, I knew we had much to discuss and ideas to explore! We’ve been working together ever since, and this blog post is the first in a set of collaborative blog posts between the two of us.
In each post, we aim to take issues that I have experienced “in the trenches” of facilitating collaborative research and consider them through the technological lens that Dave brings. We hope you enjoy this series, and we encourage you to let us know if there’s a particular collaborative research challenge that you’d like us to tackle.
Teams that choose to pursue grant opportunities need to keep up with the trends if they want to secure dollars supporting their projects. Open access, FAIR data, and interdisciplinary collaboration are a few topics that have become more and more important to funders, whose priorities often reflect progressive social change and broad issues like inclusivity and accessibility.
As an example, consider various funding opportunities at the National Science Foundation (NSF), a federal funding agency with a long history of funding research teams and of pushing groups of researchers to exceed their potential. Here is a sampling of program expectations regarding teams:
I have worked with many teams. In catalyzing and developing nascent teams, the processes of forming the team, identifying and including members, and establishing a team culture can be onerous. Creating representative ways to describe and visualize teams as connected, integrated, cohesive groups of individuals working together can be just as challenging. Unfortunately, when seeking funding and resources for teams, the conservative choice usually wins: an organizational chart or similar hierarchical rendering that communicates roles and a few relational pathways.
The exemplar solicitations above challenge convention and attempt to expand the capabilities and expectations of research teams. But some of their requirements remain conventional. It is time for a change, not only in how we visually represent teams, but in moving beyond static relational images to technology platforms that support teams' behaviors, dynamics, decision-making, collaborative work, and potential to innovate together.
Let’s explore the NSF ERC solicitation in a bit more detail. The solicitation specifies that a successful proposal will delineate:
The accompanying narrative for the organization chart should define the functional roles and responsibilities of each leadership position, and how these positions support the integrated strategic plan described earlier. It should also define the functional purpose of any additional advisory bodies that are deemed necessary to support the four foundational components, accomplish the proposed ERC vision, and achieve the desired long-term societal impact. Note that the functional roles of the two mandated ERC Advisory Bodies, the Council of Deans and the Student Leadership Council, are defined earlier in the section on Community Feedback. Since the quality of team member interaction is critical to team effectiveness, describe the managerial processes overlaying the organization chart that will be used to integrate the team. Please provide sufficient detail to allow critical evaluation.
The NSF ERC solicitation suggests that an organizational chart is the way to go. It details what the chart should demonstrate: functionality, relationships, and integration within the team. These are important features of ERC teams, but will a two-dimensional, non-interactive, hierarchical representation really align with what the Gen-4 ERC teams are expected to do?
The Gen-4 ERC program has been re-envisioned. So, let me invite Dave to share his perspective on how the representations of teams can be re-envisioned too.
Dave King: I don’t know nearly as much as Alicia when it comes to assembling academic researchers or applying for grants, but as a software architect and data scientist, I do know one thing for sure — a tree is not the same thing as a network. If you’re not sure what I mean by that, just check out this Dilbert cartoon:
The idea of printing out a website is laughable because a website is, well, a web. It’s a set of interconnected content that can be navigated in a multitude of different ways from a multitude of different directions. It’s not a tree. Sure, you can try to make it look like a tree by creating a “site map” like the image below:
There is some utility in being able to look at a website like that, because it helps you understand how a website is built. But it doesn’t really capture at all how a website is used. If you visualize the traffic through the different pages of a website, you don’t get a hierarchical tree, you get something like this:
The difference between those two visualizations and what they represent, how something is built vs. how it behaves, encapsulates exactly the problem with trying to convey the innovative trans-disciplinary nature of your research team through an org chart. The org chart might show how the team was built, but it doesn’t provide any insight into how the team will behave.
Just as a website depends on a person's ability to jump from page to page regardless of hierarchy, a research team depends on its members' ability to jump from perspective to perspective and from expertise to expertise. That kind of jumping around is exactly the process of ideation that makes trans-disciplinary teams so powerful, and so different from multi-disciplinary teams. Assembling a team that includes different expertise doesn't guarantee that the expertise will be integrated in synergistic ways. Said another way, it's easy to build teams that have siloed experts within them (trees) but much harder to demonstrate teams that are true collaborations (networks), which is ultimately what organizations like the NSF want to fund.
The good news is that when it comes to building networks, and visualizing them, technology can really help. There is a growing number of tools for network visualization. Many of these tools are aimed at technical developers, but the barrier to entry keeps dropping as it becomes more widely recognized that non-programmers have reasons to visualize networks too. It is now fairly easy to take an Excel file and transform it into a network. For example, I made a spreadsheet of the two-person team comprised of me and Alicia, and some of our expertise and focus areas:
Then I used an Exaptive xap (pronounced zap) to turn it into this network:
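For readers who want to try this themselves, the spreadsheet-to-network step can be sketched in a few lines of Python. The column names, people, and expertise areas below are hypothetical stand-ins, not our actual spreadsheet or the Exaptive xap:

```python
# Sketch: turn a spreadsheet (CSV) of people and expertise areas into a
# network, represented as an adjacency map. Rows and columns here are
# hypothetical stand-ins for the actual team spreadsheet.
import csv
import io
from collections import defaultdict

csv_text = """person,expertise
Dave,data visualization
Dave,software architecture
Dave,team facilitation
Alicia,research development
Alicia,team facilitation
"""

edges = defaultdict(set)  # node -> neighboring nodes
for row in csv.DictReader(io.StringIO(csv_text)):
    edges[row["person"]].add(row["expertise"])
    edges[row["expertise"]].add(row["person"])

# A shared expertise node is what links two people into one network.
shared = edges["Dave"] & edges["Alicia"]
print(shared)  # {'team facilitation'}
```

Feeding a structure like `edges` to any network-drawing tool then produces a picture like the one above, rather than a hierarchy.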
Steven Johnson, author of the book Where Good Ideas Come From, gave a fantastic TED talk about how ideas are networks. My hope is that as technology makes it easier for more people to define networks, visualize networks, and quantify the dynamics of networks, they will be able to start providing organizations like the NSF not org charts that ignore the inter-relationships within a team, but network charts that illuminate them. Only through that sort of visualization can a team's true potential for innovation be conveyed.
Once you start looking at teams from a network perspective, you open up new opportunities for characterizing teams. In the Harvard Business Review article “Better People Analytics”, Paul Leonardi and Noshir Contractor look at how different team network structures may offer different advantages for different situations, developing a set of team “signatures”. The figure on the left shows two signatures covered in the article, one better for innovation, one better for getting projects done on time!
These signatures are important because they open the door for us to be able to do more than just visualize teams — they allow us to start to quantify some metrics of team performance.
I’m particularly interested in a team’s capacity for cognitive exaptation — the application of an idea in a different context than the one in which it originally emerged — so I’ve focused my company on creating a platform that can help find teams more likely to exapt tools and techniques across fields. The screenshot below shows our analysis of the possible 3-person scientific teams that could be assembled from 143 scientists associated with a large pharmaceutical company’s research program. There were nearly half a million possible 3-person teams in just that small pool of researchers, but by modeling each possibility as a network, and then analyzing each network’s signature, we could organize all those possibilities along a meaningful axis and allow for interactive inspection of the different structures along that axis:
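The scale of that analysis is easy to verify, and the team-scoring step can be sketched as follows. The toy pool and the diversity metric here are made-up stand-ins, not Exaptive's actual network-signature analysis:

```python
# Sketch: enumerate all 3-person teams from a pool and rank them by a
# simple "signature" metric. The toy pool and the diversity score are
# hypothetical illustrations only.
from itertools import combinations
from math import comb

print(comb(143, 3))  # 477191 distinct 3-person teams from 143 scientists

pool = {
    "A": {"biology", "statistics"},
    "B": {"chemistry"},
    "C": {"statistics", "software"},
    "D": {"biology", "chemistry"},
}

def diversity(team):
    """One toy signature: number of distinct fields the team covers."""
    return len(set().union(*(pool[p] for p in team)))

ranked = sorted(combinations(pool, 3), key=diversity, reverse=True)
print(ranked[0], diversity(ranked[0]))
```

A real signature would score network structure (who connects to whom), not just coverage, but the enumerate-then-rank shape of the computation is the same.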
Once we start talking about applying algorithms to “quantify” things, we start getting into the dangerous territory of thinking that we might be able to use algorithms to replace human decision making. Even though I’m a technologist and data-scientist, I’m not a fan of that approach. Don’t get me wrong, I think there are plenty of areas where computers can probably do a better job than humans making certain types of decisions, but I don’t think that designing innovative teams is one of them.
There are so many subtle parameters involved in high-performance teams and their interpersonal dynamics that a computer has no hope of having the complete picture. The computer, therefore, has to be used as just a tool in a human’s process. It’s important for that process to be a collaborative back-and-forth between what the computer can analyze and what the human can intuit. Computers can help separate some of the signal from the noise, but humans have to be able to iterate based on what the computer is helping them see, adding their own knowledge and experience into the mix, steering the computer in the most fruitful directions. Steve Jobs used to say that a computer is like a bicycle for our minds. In the case of team-building, I think a computer is like a flashlight for team-leads and facilitators.
That’s where people like Alicia come in. Research facilitators know where to shine that flashlight and how to take what the computer is telling them and combine it with all the things the computer doesn’t know. They know how to take the visualizations that a computer can produce and place them into the broader context of a proposal for innovative research. I’m excited about giving those people better tools to do their job. There is an increasing proliferation of research profiling systems that help organizations capture and harvest all the information they have about their researchers, publications, grants, focus areas, etc. In the effort to build those catalogs, it’s important not to forget what all those data are for — being able to assemble teams that can do innovative research, and being able to convey to sponsors why those teams are worth funding.
Dr. Alicia Knoedler: We started this conversation by suggesting that providing more insightful visualizations for research teams is among the many opportunities for innovation in research programs. The technology is readily available and becoming easier to utilize. Dave and I are both excited by the prospect provided by technology to team facilitators to enhance their powers of observing, noticing, and catalyzing behaviors within research teams. But we also want to promote these ideas within funding agencies, research team sponsors, and to the reviewers who evaluate proposals. There is opportunity to open up requirements within funding programs for new ways to show team compositions, illuminating team structures and functions, the dynamics and roles of team members, and the proposed behaviors that will lead to competitive research outcomes.
We know that investigators submitting proposals for research funding follow the solicitation instructions to the letter. But if solicitations require conventional, traditional information in team descriptions, it is difficult for research teams to propose something genuinely new. We think the accessibility of these new tools and technologies will inspire others to push beyond conventional approaches and show that visualizing research teams can be just as innovative as the research being proposed.
I recently moved from Boston to Oklahoma City. My wife got offered a tenure-track position at the University of Oklahoma, which was too good an opportunity for her career for us to pass up. Prior to the move, I had done a lot of traveling in the US, but almost exclusively on the coasts, so I didn’t know what living in the southern Midwest would bring, and I was a bit trepidatious. It has turned out to be a fantastic move. There is a thriving high-tech startup culture here. I’ve been able to hire some great talent out of the University, and we’re now planning to build up a big Exaptive home office here. Even more important, I was delighted to find a state that was extremely focused on fostering creativity and innovation. In fact, the World Creativity Forum is being hosted here this week, and I was asked to give a talk about innovation. As I thought about what I wanted to say, I found myself thinking about . . . cowboys.
Before I came to Oklahoma the image I had in my mind was pretty much like this:
And while there is certainly a cowboy culture here, I have yet to see anyone in downtown Oklahoma City ride off on a horse into the sunset. This Marlboro Man image is not the Oklahoma of today, but nevertheless it’s an image that persists, quite powerfully, in our collective imagination.
The more I thought about my new home in OKC, and the mythology of the Marlboro Man, the more I thought about the myth of the lone genius. We love the idea of the lone genius. Perhaps the desire to think of innovators this way is particularly American? Perhaps it comes from our history and our ethos, born from the enviable rugged independence of the frontiersman and reinforced in the next generations by endless cigarette ads? We often use Thomas Edison as the example of the quintessential inventor, and when I did a Google image search, I was quite struck by the pictures I found, like this one:
The above picture of Edison strikes me as the intellectual equivalent of the Marlboro Man image. We know this picture of Edison isn’t the whole story. We know he had a huge team of people working under him in his lab, all of them conspicuously absent from the picture above. We know he performed in-depth research on other inventors’ work, like that of Joseph Swan, who had invented a less practical but functional light bulb years before. Swan’s picture is yet another Marlboro Man lone inventor image:
I think that in order to push the discourse about innovation ahead, we need to find new ways to describe and exalt our inventors. Indeed, Edison was a genius, as was Swan, but their genius lay in how they synthesized previous work and connected together the contributions of others, adding their own contributions as was needed to fill in the gaps.
It’s a challenge to capture, explain, and promote this view of innovation and innovators. Increasingly we recognize that ideas are networks, but that recognition leaves us with the challenge of figuring out how to represent them. The field of data visualization has had a notoriously hard time displaying complex networks. They so frequently end up looking like a big mess that “hairball” has become a technical term in the field, for reasons the network below makes obvious.
If we are to push the discourse about innovation forward, we need to find ways to visualize idea networks in ways that are just as striking and inspirational as the portraits of Edison and Swan.
This image, created by London-based group Social Physics, shows the co-authorship of papers by researchers studying Hepatitis C. It’s still a bit of a hairball but certainly a beautiful one.
A while back we used Exaptive to analyze the hundreds of different data visualization projects that were collected on the well-known site VisualComplexity, and to draw a network of the contributions based on their field of study and author:
Once we did so, data visualization guru Ben Fry was clearly revealed as a hub in the network, his many projects spanning many disciplines. Which is a better picture of Ben the innovator, the one above where he is a hub connecting many disciplines or the one below in the style we are more accustomed to?
As a photographer, I love portraits like the one above. As a student of innovation, I have come to love network diagrams much more. As a software architect, when I sat down to design the Exaptive platform, it was clear to me that it needed to be inherently based on a network structure. To represent that underlying structure, we decided to use a dataflow programming paradigm within our development environment:
All programming is modular. All programmers build programs by leveraging libraries of functions built by others, but the act of writing a program often obfuscates these connections, much like the act of invention often obfuscates all the prior art that made it possible. When we designed Exaptive we wanted to let people build new things, but within a framework that preserved the network of how they did so. We believed that the network was the idea itself, just as important as any particular output. We also believed that the most important thing behind each of those code modules was the person who wrote it and the person who used it, and that any system making it easier to connect code together should make it easier to connect the people behind that code too. We recently had an exciting project show us just how important this is when it comes to generating new and novel solutions.
A few weeks ago we started work on an interesting data science project. Our contact at a large NGO had a bunch of longitudinal data about every country on Earth, and wanted help trying to group the countries together by these data. Matt Coatney, a data scientist who was using the Exaptive platform to experiment with neural networks on time-series data, decided to take on the challenge. He started experimenting with converting the country time-series curves into alphabetic sequences. He was thinking of DNA and hoping that once he had alphabetic sequences he could use DNA clustering algorithms to cluster the countries like they were different genetic codes. Exaptive’s Science Advisor, David Merberg, a PhD cell biologist, got alerted to the work Matt was doing. When he saw the letter strings coming out of Matt’s algorithm, he remarked how they were more like proteins than DNA and perhaps better suited to protein clustering techniques. This led Matt to push his algorithm in a new direction, ultimately producing a novel way of clustering global country data based on the techniques that David clued him in to from genetics. It was innovation in action, and if I were to draw a picture of it, it would look like this:
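Matt's letter-string step resembles what the time-series literature calls symbolic aggregate approximation (SAX). A minimal sketch, with an assumed four-letter alphabet, assumed breakpoints, and made-up data, might look like this; it is not the project's actual algorithm:

```python
# Sketch: convert a numeric time series into a letter sequence, in the
# spirit of SAX-style symbolic discretization. The alphabet, breakpoints,
# and data are illustrative assumptions, not the project's algorithm.
from statistics import mean, stdev

def to_letters(series, alphabet="abcd"):
    """Z-normalize, then bin each value into one of len(alphabet) bands."""
    m, s = mean(series), stdev(series)
    z = [(x - m) / s for x in series]
    letters = []
    for v in z:
        # Equal-width bands spanning roughly [-2, 2] after normalization.
        band = min(len(alphabet) - 1, max(0, int((v + 2) / 4 * len(alphabet))))
        letters.append(alphabet[band])
    return "".join(letters)

gdp_growth = [1.0, 1.2, 1.1, 3.5, 3.8, 0.2]  # hypothetical country data
letters = to_letters(gdp_growth)
print(letters)
```

Once every country's curve is a string like this, string-alignment and clustering techniques from bioinformatics become applicable, which is exactly the exaptation the story describes.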
This, to me, is what true invention looks like. The project involved a great exaptation, taking something from one field and applying it in another in a new way. It involved three different people, from three different disciplines, living in three different states, and it all got done in three weeks. It’s a messier picture than the nice neat portrait of Thomas Edison standing proudly alone in his lab, but these are exactly the sorts of pictures we need to learn how to communicate. We need to learn to value the messy, interconnected hairballs of idea networks and give them just as much swagger as those cowboys riding off into the sunset.
Author – Stephen Arra, Developer
The Einstellung effect is a psychological phenomenon that changes the way we all come to solutions and impedes innovation.
Every day we solve problems – from choosing the quickest way to work, to how we’re going to fix a problem for that one client. How do we know if our solutions are any good? What if there is a much better solution that we haven’t thought of yet?
I recently came across a cover letter in which someone said, “Every solution comes to me eventually.” This struck me as a strange thing to say. We don’t have visibility into every solution; we all have unknown unknowns. Even our known knowns may fail to connect to a particular problem. The Einstellung effect may occur, preventing us from considering all the available solutions.
The Einstellung effect occurs when preexisting knowledge impedes our ability to reach an optimal solution. We become unable to consider other solutions once we think we already have one, even though ours may not be accurate or optimal. It leaves us cognitively incapable of distinguishing previous experience from the current problem. So we may solve the problem, but we don’t actually innovate.
Einstellung is a German word that translates to setting, mindset, or attitude. The brain tries to work efficiently by reaching for past solutions without giving the current problem much thought. It’s stuck in a mindset. We apply previous methods to a seemingly similar problem instead of evaluating the problem on its own terms. This effect appears across disciplines and skill levels. Whether or not we know it, we all experience it.
The classic experiment used to validate this effect was conducted by Abraham Luchins in 1942 – the water jar problem.
Participants were separated into two groups, one of which was given a few priming questions before the core question. The priming questions led the first group to focus on a particular method of solving the problem. When presented with the core problem, one that couldn’t be solved with the same technique, they were unable to solve it. The participants in the second group, who were asked the same core question without the primer, more often than not found the optimal solution. (You can find the problem here. Try it yourself!)
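Luchins's jar problems also make a nice little search exercise. A brute-force breadth-first solver over fill/empty/pour moves might look like the sketch below; the capacities 21, 127, and 3 with target 100 come from one of the classic problems, while the code itself is just an illustrative sketch:

```python
# Sketch: breadth-first search over jar states for Luchins-style water jar
# problems. Each move fills a jar, empties it, or pours one into another.
from collections import deque

def solve(capacities, target):
    """Return the shortest list of moves putting `target` units in any jar."""
    start = tuple(0 for _ in capacities)
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, moves = queue.popleft()
        if target in state:
            return moves
        for i in range(len(state)):
            nxt = [
                (tuple(capacities[i] if j == i else v
                       for j, v in enumerate(state)), f"fill {i}"),
                (tuple(0 if j == i else v
                       for j, v in enumerate(state)), f"empty {i}"),
            ]
            for k in range(len(state)):
                if k != i:
                    amount = min(state[i], capacities[k] - state[k])
                    s = list(state)
                    s[i] -= amount
                    s[k] += amount
                    nxt.append((tuple(s), f"pour {i}->{k}"))
            for s, mv in nxt:
                if s not in seen:
                    seen.add(s)
                    queue.append((s, moves + [mv]))
    return None

# Classic problem: jars of 21, 127, and 3 units; measure exactly 100.
moves = solve((21, 127, 3), 100)
print(moves)
```

The primed participants kept applying the B - A - 2C recipe even when a shorter route existed; the exhaustive search, of course, has no such bias.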
Another experiment analyzed chess players and their eye movements on the board. The participants were again split into two groups: the first saw a board containing both a suboptimal solution and an optimal one, while the second saw only the optimal solution. The group shown the suboptimal solution kept looking at squares related to it, even though they said they were actively looking for something better. Their eyes had become fixated on the known solution. The Einstellung effect prevented them from viewing the board without bias, even though they were intentionally trying to do so.
This effect suggests that the more experience we gain, the more likely we are to fall prey to its influence and fail to evaluate each problem on its merits. We need to ask what is fundamentally different about each new problem and evaluate it without bias, preventing our brains from slipping into a mechanized state of autopilot. It is not a lack of knowledge that leads to these errors, but initial ideas formed from previous experience.
Data science is a field in which new technologies and methods seem to emerge every day. But be wary: trending methods can cloud our judgment. New tools and ideas can be shiny objects we can't look away from even when they're not right for our problem, such as adopting Hadoop or NoSQL simply because they're trendy and associated with 'big data.' Rather than leveraging a smaller dataset, we jump into an ocean of unexploited data without adequate reasoning or preparation. Or we approach a problem by blindly throwing the trending algorithm of the day at it. (Recurrent neural networks and random forests are all the rage these days.) This can lead to solution blindness, especially when intelligence is added too early in the process. Sometimes we form our problem around the solution rather than the other way around.
The Einstellung effect also shows up as confirmation bias, where we ignore results that don't support our initial model or hypothesis. Feature and model selection need to reflect an accurate depiction of the data. Exploratory data analysis is a critical stage in data science that is often overlooked. We need to explore and visualize the data in various ways to dispel preconceived notions before moving toward solutions.
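A tiny illustration of why that exploration matters: two made-up datasets can share the same summary statistic while having completely different shapes, something only looking at the data reveals:

```python
# Sketch: identical means, very different data. A single summary statistic
# can confirm a preconceived notion that exploration would dispel.
# Both datasets are made up for illustration.
from statistics import mean, stdev

a = [4, 5, 5, 6, 5, 5]    # tight cluster around 5
b = [0, 10, 0, 10, 5, 5]  # same mean, wildly different spread

print(mean(a), mean(b))                        # 5 and 5
print(round(stdev(a), 2), round(stdev(b), 2))  # 0.63 vs 4.47
```

Anscombe's quartet makes the same point more dramatically: four datasets with nearly identical means, variances, and correlations that look nothing alike when plotted.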
“Good is the enemy of great.” – Jim Collins
Though a bit of an extreme case, this philosophy is echoed by J.K. Simmons's character in the movie Whiplash: “There are no two words in the English language more harmful than ‘good job.’” One becomes content with a local maximum rather than the absolute maximum.
(Image source: http://www.rogerebert.com/reviews/whiplash-2014)
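The local-maximum trap can be made concrete with a toy hill-climbing sketch: a greedy search from one starting point settles on a "good" peak, while restarting from elsewhere, i.e., breaking the pattern, finds the "great" one. The landscape function is an arbitrary made-up example:

```python
# Toy illustration: greedy hill climbing settles on a local maximum
# ("good"), while a restart from a new starting point can find the global
# maximum ("great"). The landscape is an arbitrary made-up function.
def f(x):
    # Two peaks: a small one near x = 2 (height 1), a tall one near x = 8.
    return -(x - 2) ** 2 + 1 if x < 5 else -(x - 8) ** 2 + 5

def hill_climb(x, step=0.1):
    """Greedily step toward whichever neighbor is higher; stop at a peak."""
    while f(x + step) > f(x) or f(x - step) > f(x):
        x = x + step if f(x + step) > f(x - step) else x - step
    return x

local = hill_climb(0.0)  # climbs to the nearby peak near x = 2
best = max((hill_climb(s) for s in [0.0, 6.0]), key=f)  # restart finds x near 8
print(round(f(local), 1), round(f(best), 1))
```

The greedy climber is the Einstellung effect in miniature: once it has a solution in hand, no single small change looks like an improvement, so it stops searching.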
Our brains are sabotaging our ability to come up with new ideas! What can we do about it? Break the pattern.
Usually when we think of geniuses, we picture people with a large working memory, able to process more at a single point in time. However, working memory, seated in the prefrontal cortex, can block other memories from forming connections, which in turn inhibits creative thinking. A well-known creative process looks something like this:
The third point is key to coming up with novel solutions and bypassing the Einstellung effect. Taking your mind off the task at hand for a while engages the broader cerebral cortex and gets you out of working memory, freeing you to explore new ideas and connections.
Similar to distraction, interleaving is the technique of switching between ongoing tasks to improve memory, retention, and learning. It allows a topic to percolate in your mind so you can extract its general rules. This is not to be confused with multitasking, though, like multitasking, it can cost productivity, since switching between projects and modes of thinking takes time. But the benefit of jumping in and out of a problem can greatly outweigh the time required if it leads to a better solution. Being flexible and allowing yourself to explore paths that don't look promising at the start are great ways to let your mind discover new dimensions of a problem.
Collaboration, meaning getting different perspectives, is a great way to break out of a rut. An approach I like is to have multiple people work on an initial concept separately, then convene with their findings and explore one another's unbiased ideas. If a solution is presented too early, it can cause the others to suffer from the Einstellung effect.
The field of data science lacks meaningful collaboration tools. Data science encompasses a large domain of knowledge and often requires more than one perspective. Competitions like Kaggle let people work together on a project, but a truly meaningful collaboration tool would not only allow data scientists to work on a dataset together but also track the decisions made along the way. Visualization redesigns, for example, could greatly benefit from this. Edward Tufte's redesign of the Challenger data effectively shows the desired result in hindsight, but his critique rests on knowledge of the outcome rather than of the decisions made in the process, which makes it an unfair one. Most of the data is left out to highlight the major data point that caused the disaster.
In a production environment sometimes good enough really is good enough. It may not be worth the extra effort to get the best solution. It may not even be possible with the current technology. Marginal benefits may not be worth the time it takes to reach a better solution. The key is in knowing the tradeoffs and when to explore. Recurring problems are the best candidate for exploring if there is a better solution, when it could be just around the corner.
A Cognitive Network
At Exaptive, one of the things we are striving to facilitate is a better method for discovering novel innovations around data. We want to eradicate the Einstellung effect in our field, and eliminate any associated efficiency loss to boot. (Hey, it’s good to have lofty goals.) We believe something along the lines of a suggestion engine is what data practitioners of all kinds need. Except, in addition to suggesting new approaches, the right suggestion engine would reveal the potential collaborators who designed those approaches. What we like to call a cognitive network – a concept that deserves a post of its own – allows people to explore data in various ways with a diverse set of collaborators and suggests different ways to think about a problem.
At its core, a cognitive network is focused on connections, connections that wouldn't be made if the Einstellung effect had its way. Connections are what let us know when something fits a particular application, where to apply a technique, or how to translate a concept to another area or field of study. They are the glue that holds pieces of information together by use and meaning. Connections, then, are crucial to innovation, and missing one is detrimental.
We should set aside time to ask whether we are settling for a known, good-enough solution or evaluating the problem with clear eyes and seeing all the options. At the end of the day, we may still not consider every solution. But being mindful of the Einstellung effect, and staying open to new approaches even after an apparent solution has presented itself, will help us reach the solutions just outside our conventional way of thinking and lead to innovation.
“Good ideas are getting harder to find,” says Exaptive CEO Dave King, citing a recent paper by MIT and Stanford researchers. He points to the skyrocketing number of researchers employed in the U.S. and contrasts it with a chart of research productivity sloping downward over the same timeline. “Those growing number of researchers are failing to produce value that outpaces what we’re spending to innovate.”
“We’ve got to fix this for the benefit of our society,” Dave adds.
Collaboration is a huge part of research. We’ve seen an erosion of the myth of the lone genius, and many grants now have stipulations requiring collaboration. Collaboration and team formation take time. How can we make it easier to get disparate groups of people to innovate around a specific problem? Can a consistent approach be applied so that it happens more quickly in a variety of scenarios?
Exaptive team member Shannan Callies organizes researcher-centric networking events that make everyone feel welcome and included. Effort is required for those outcomes because, as Shannan quips, “Networking is awkward and humans don’t like it.” Shannan has overseen events where hundreds of PhDs, researchers, clinicians, and other scientists gathered around a specific purpose. She says, “One of the greatest challenges is making sure the right people get connected.”
Dr. Alicia Knoedler, Executive Associate Vice President for Research at OU, sees the same issue and highlights how complex research development can be: “Funding agencies and other groups that are interested in solving problems are interested in a more multi- and interdisciplinary type of team. Where it’s not just ‘here’s a problem and here’s a chemist trying to solve the problem.’ It’s ‘here’s a problem’ and there’s people from art, chemistry, engineering and psychology, all on a team together to address this problem in very unique ways.”
Alicia continues, “You have language issues, terminology issues, methodological issues. People have different expectations. Maybe they have a different way of expecting the pace of research to occur. [There’s a difference between] all the researchers thinking independently about the problem and then coming together to figure it out, versus ‘what is interesting about this problem when we all think about it together?’ And then, ‘let’s figure out a different way of approaching it together.’”
Dave believes there’s an analogy between the productive friction of a city and finding cohesive, productive teams. Put a million people or so in a small area, and you’ll get some friction. Some of it is unproductive, like packed subways. Some of it is very productive, like when people meet at conferences or in a building’s elevator. (Another example: Exaptive hosts Data + Creativity, a meetup in Oklahoma City where we have lively and thoughtful discussions with people from different industries and types of work about how to use data for good.)
As an engineer, Dave built Exaptive to facilitate innovation intentionally, not by chance. “The right sort of data used to create the right sort of friction can create insight in just a 30-minute workshop with nothing more than paper and pens,” he says. Exaptive’s Cognitive City, a virtual space for organizing and tracking the progress of collaboration, “takes that concept and scales it beyond the limitations of physical space.”
Members of the National Organization of Research Development Professionals are familiar with the challenges of bringing collaborative teams together. NORDP holds conferences every year for people involved in research development. Alicia Knoedler was one of the founders and was president in 2013 and 2014. “There were about 60 of us who had been collectively identified by a single person who was full-time at Northwestern at the time, Holly Falk-Krzesinski [the founding president of NORDP, who now works full-time for Elsevier and still teaches at Northwestern]. She very astutely noticed that there were a lot of people doing this work that she was doing at Northwestern, but there wasn’t a collective name for it. So she basically just called a lot of people and said, ‘This is what I do, what do you do?’”
A community soon formed around people who found themselves doing the same kinds of work. Very quickly they were learning best practices from one another, having previously not known there were so many other people with similar jobs. The group realized they needed to formalize an organization so they could “reinforce one another and train and get involved and find more people.”
When asked if members join NORDP to improve at networking, Holly notes, “Actually, NORDP members tend to be great at networking—a hallmark of successful research development professionals. NORDP members connect researchers across disciplines and institutions, connect internal expertise with external resources such as grant funding, and connect with one another. Another strength of the NORDP membership and a distinguishing feature of the organization is the large number of people with strong research backgrounds, including many with advanced degrees across a range of disciplines. Research development professionals take seriously the good stewardship of science and tend to be a very service-oriented group.”
The benefits of having a peer group have been tremendous. Holly says, “It’s the most collegial organization I’ve ever been a part of! Everyone is so generous with sharing their experiences and ideas… That may seem counterintuitive since often our investigators and institutions may be competing against one another for grants, but we’ve found that sharing information and helping one another really just makes the pie bigger for everyone.”
The forming of NORDP is a great example of a group that came together based on work they had done in the past. At Exaptive, we’ve noticed when two people are prepared with information about the qualities they bring to a project — what they have in common and what their differences are — they can quickly have a productive conversation to solve problems that can impact the future of their work. We call this artifact-based collaboration. The artifacts can be anything created by humans: articles, tools, techniques, hypotheses, datasets. (Example: “This collaboration with this person is suggested for you because they are looking at biomarkers and you are researching drug efficacy, and the data you’re both using have similar structures.”)
It’s a departure from attribute-based collaboration, which is what LinkedIn and other social media offer. (Example: “This group of accountants is recommended for you because you say you are an accountant.”) Attributes can characterize people or artifacts, but only people can create artifacts. Social media uses algorithms to make suggestions based on commonalities: if you liked X and didn’t like Y, and X is very similar to Z, Z is more likely to show up in your social media news feed. When we use an algorithm that accounts for attributes and artifacts, we can make recommendations based on complementary differences. Exaptive’s algorithm isn’t just looking at how potential collaborators are alike. It’s also weighing their work history and the different perspectives they have on a shared issue.
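To make the contrast concrete, here is a toy sketch of the two matching styles. All names and data are invented, and this is not Exaptive’s actual algorithm, just an illustration of the distinction:

```python
# Toy profiles (invented): each person has attributes (who they are)
# and artifacts (work they have produced), both as plain sets.
people = {
    "ana":  {"attrs": {"accountant"}, "artifacts": {"tax-model", "ledger-dataset"}},
    "ben":  {"attrs": {"accountant"}, "artifacts": {"tax-model"}},
    "cara": {"attrs": {"biologist"},  "artifacts": {"ledger-dataset", "biomarker-study"}},
}

def attribute_matches(name):
    """Attribute-based: recommend people who *are* like you (social-media style)."""
    me = people[name]["attrs"]
    return [p for p in people if p != name and me & people[p]["attrs"]]

def artifact_matches(name):
    """Artifact-based: recommend people whose *work* touches yours,
    even when their attributes differ."""
    mine = people[name]["artifacts"]
    return [p for p in people if p != name and mine & people[p]["artifacts"]]

print(attribute_matches("ana"))  # ['ben']          (same profession)
print(artifact_matches("ana"))   # ['ben', 'cara']  (shared work, different fields)
```

Note how the artifact-based pass surfaces the cross-disciplinary pairing (an accountant and a biologist working on the same dataset) that the attribute-based pass never sees.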
Perspective plays an important role in building teams, and can greatly affect the amount of time it takes for a team to form, storm, norm, and perform. Alicia notes, “You can have a team that’s been working together for years and introduce one person and it changes the dynamic completely. There are also several ways in which people never have access to join a team, and it takes a special, concerted effort to reach out and understand where the gaps are and understand ‘if we only had somebody who could do this, or understood this, that would change the way in which we are approaching this.’ And it takes a lot of ego-less people to realize there’s a gap and it needs to be filled by somebody who isn’t already in the room.”
Building the Research Team with a Dynamic Collaboration Engine
A dynamic collaboration engine uses an algorithm that deploys the artifact-based collaboration method to suggest optimal teams. Recently we had the opportunity to set up a Cognitive City for a group of about 150 doctors, researchers, and clinicians, each of whom had participated in a one-year, 20- to 35-person cohort of a program that ran for seven years. The participants have different specialties, have worked on studies with diverse regional and topical foci, and currently live all over the world. The director of the program was interested in finding the best three-person team to solve a health problem in a specific region.
If you were to create a three-person team from a group of 143 individuals, you’d have 477,191 options to choose from. (Sounds like a fun afternoon.) Even if everyone were in the same gigantic room, it would be almost impossible to find the best team by trial and error. It would be completely by chance if the best team were found at all. With our algorithms looking through the connections for similarities and, more importantly, complementary differences, the number of suggested teams was reduced by more than 99.8%. Tweak the algorithms to find the best three-person team with a connection to a certain region, or a core competency, and the number of potential teams quickly resolves to a few optimal choices. Six, in this case. (Your afternoon just opened up!)
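The arithmetic is easy to check: the number of distinct three-person teams from 143 people is the binomial coefficient C(143, 3). A quick stdlib sketch (the 0.2% survival rate is just the complement of the quoted 99.8% reduction):

```python
from math import comb

pool = 143       # researchers in the network
team_size = 3    # teams of three

total_teams = comb(pool, team_size)   # 143! / (3! * 140!)
print(total_teams)                    # 477191

# A 99.8% reduction leaves at most ~0.2% of the candidates.
remaining = total_teams * (1 - 0.998)
print(round(remaining))               # ~954 teams still in play before further filtering
```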
How do you score a team for its promise to be collaborative? We call it a group’s ‘exaptation potential.’ It’s calculated using artifacts, attributes of the people, and attributes of the artifacts. Exaptive data scientist Frank Evans explains how to measure exaptation potential: “The Cognitive City creates a matrix that takes into account attributes. Both attributes of artifacts and attributes for each individual researcher, analyst, scientist, administrator. The algorithms balance the similarities for the whole team. So while the individuals may be connected by some things they have in common, they’re suggested for a team because their differences complement the strengths of the other members within the context of the common purpose.”
Dave points out the difference between a team with a low exaptation potential and a high exaptation potential. “Having too little in common leads to a low score, but having too much in common also leads to a low score,” Dave says. “The key to innovation is not just to maximize overlap. It’s to have the right commonality combined with the right amount of complementary differences.”
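That trade-off can be pictured with a toy scoring function: average the pairwise attribute overlap across a team and score highest near a mid-range target, lowest at the extremes. This is purely illustrative (invented data, simple Jaccard overlap), not Exaptive’s actual algorithm, whose details aren’t public here:

```python
def jaccard(a: set, b: set) -> float:
    """Fraction of attributes two people share (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def exaptation_potential(members: list[set], target: float = 0.5) -> float:
    """Illustrative team score: average pairwise overlap, scored highest
    near `target` and lowest when members are all-same or all-different."""
    pairs = [(i, j) for i in range(len(members)) for j in range(i + 1, len(members))]
    avg = sum(jaccard(members[i], members[j]) for i, j in pairs) / len(pairs)
    return 1.0 - abs(avg - target) / max(target, 1.0 - target)

too_alike     = [{"astronomy", "python"}, {"astronomy", "python"}, {"astronomy", "python"}]
too_different = [{"astronomy"}, {"law"}, {"pottery"}]
balanced      = [{"astronomy", "imaging"}, {"neuroscience", "imaging"},
                 {"statistics", "imaging", "astronomy"}]

for team in (too_alike, too_different, balanced):
    print(round(exaptation_potential(team), 2))
# prints 0.0, 0.0, 0.83 -- both extremes score low; the mixed team scores high
```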
Removing some of the time-consuming barriers to building a great team means the team can begin to solve a problem faster. When a team gets started faster, it’s an opportunity for collaboration and innovation to happen sooner. Endless possibilities branch out from there.
We’ve been exploring ‘Faster Is Different’ as it applies to facilitating research and innovation in general. Get more context on our perspective here. And here is an additional article about how science makes innovation faster and different:
Author – Frank Evans, Data Scientist
Everything is derivative. Take advantage of that. “New” ideas are the next step in an extensive network of existing people and ideas. If we can get the data and reconstruct the network, we can analyze it and understand where branches of a network have the potential for innovation.
Great ideas do not need to be created. They can be discovered.
An astronomer starts chatting with a neuroscientist about the sheer amount of radio telescope data that neither he nor any of his colleagues know what to do with. Through their conversation they discover that million-mile “slices” of space spanning hundreds of light-years are surprisingly similar to millimeter-slice data of the neurons of the human brain. So why not use an existing MRI visualization to look at telescope data?
Though imperfect, it gives the astronomer the ability to “fly around” the supernova event the same way a surgeon can “walk around” the patient’s brain scans before operating.
This is the famous Pillars of Creation nebula. Using the MRI visualization, astronomers learned that they were not pillars at all and gained a better understanding of the velocity of galactic movement.
The outcome of this encounter was to launch a greater endeavor within the university toward inter-disciplinary study between astronomy and medicine, and in short order a familiar thing happened.
This is a cardiology map of arteries surrounding the heart showing thickness and pliability by color.
Medical researchers wanted to understand relationships between the branches of the arteries. They used ideas that came from multi-galactic astronomy meant to differentiate boundaries between multiple merging galaxies to create a new visualization. This new approach increased diagnosis rates of a particular disease from 39% to 62%.
But that idea came from geneticists, who used the technique to understand how different gene expression regions interrelate. And that idea came from evolutionary biology: Darwin’s thinking about the interrelationships of speciation. From cardiology back through astronomy, genetics, and evolutionary biology, each field was solving a different problem, yet not an entirely different one, and each needed an understanding of where those problems overlapped.
This is not an isolated incident. The world is taking notice that ideas are networks, and that innovations often involve adapting ideas from adjacent domains.
The chain of insight I just described was fortuitous. The people involved stumbled onto some impactful aha-moments by literally stumbling into each other. This phenomenon, however, can be orchestrated.
What’s required is to disassemble and rebuild networks of people, their interests, and their innovations into a visible network. Then we can analyze that network and mine it for the fundamental elements of those innovations. We can do this ahead of time, instead of just enjoying serendipity as an anecdote.
There are three basic elements that we’ve found we must unpack to capture the innovation potential of a network: the people, their attributes and the artifacts of their work. Attributes are what we call a person’s expertise, experience, affiliations, and other characteristics. It’s important to differentiate people and their attributes from artifacts of actual work, like a data set, an algorithm, or a particular analytical approach, for example. And you need to understand all three to really take advantage of a network.
You won’t get far just building a network of artifacts. New ideas don’t exist without the human minds that interpret and apply them. We’ve yet to find that humans can be eliminated from the recipe for innovation.
You’ve also got to know who those people are — their attributes. Otherwise, you’re assembling people at random, without any hints at how they might relate to each other.
Without the artifacts, though, you’ve just got a group of people networking. They’re meandering through conversation hoping for an exciting realization. Artifacts are the basis for spotting an actual breakthrough and pursuing it.
Here’s one visualization of such a network. There are people with areas of expertise, and there are posters, representing the actual work they are doing – the techniques they are using and the data involved.
Building an analyzable structure of people, their interests, and their work product, all as first-class citizens, transcends a mere social network and becomes what we refer to as a cognitive network. In a cognitive network, researchers and technologists find untapped ways to collaborate around related work.
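A cognitive network like this can be sketched with nothing fancier than typed node sets. The sketch below is illustrative only (all data invented), showing how a shared attribute can surface a candidate collaboration like the astronomer/neuroscientist pairing:

```python
# Toy cognitive network (all data invented): people, their attributes,
# and the artifacts of their work, kept as distinct kinds of nodes.
attributes = {
    "astronomer":     {"imaging", "astrophysics"},
    "neuroscientist": {"imaging", "neurobiology"},
    "lawyer":         {"litigation"},
}
artifacts = {
    "astronomer":     {"radio-telescope-dataset"},
    "neuroscientist": {"mri-visualization", "neuron-slice-dataset"},
    "lawyer":         {"case-brief-archive"},
}

def candidate_collaborations():
    """Yield pairs with some attribute overlap: enough common ground
    for their differing artifacts to be worth exchanging."""
    names = sorted(attributes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = attributes[a] & attributes[b]
            if shared:
                yield a, b, sorted(shared)

for a, b, shared in candidate_collaborations():
    print(a, "+", b, "share", shared)
    print("  artifacts in play:", sorted(artifacts[a] | artifacts[b]))
```

With no shared attributes, the lawyer never pairs with anyone here, echoing the too-little-overlap problem; the astronomer and neuroscientist pair on “imaging,” and their distinct artifacts are what make the pairing interesting.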
Too much overlap isn’t disruptive enough. Two astronomers could never have innovated the way the neuroscientist and astronomer did. They’d more likely just continue having the same problem, never questioning its seeming inevitability.
Too little overlap is chaotic. Put a data scientist like me in conversation with a lawyer, with no context as to what we might have in common, and I’ll just end up telling lawyer jokes. The likelihood of insight is too low without some critical mass of overlap.
We conducted this kind of project with the Gates Foundation’s Next Generation Scientist program. The program brings together researchers from all over the world to work on a number of medical issues in a one-year fellowship. We helped them take nearly a decade of fellows’ data and assemble a cognitive network. The goal was to predict, with greater certainty than pure chance, how teams could come together to more effectively solve specific problems.
Out of 477,191 possible combinations, we were able to identify two teams with nearly ideal balances of similarity and difference. In this example, there are three people (blue dots) with certain attributes in common and some not (yellow dots). There are also artifacts of their work (grey dots) that have some overlap with each other and that bring some unique perspective.
What we’ve learned since then, and built into our system, is how to enable this approach on a larger scale. If we only find potential for innovation here and there, it’s not much different from a fortuitous conversation. We want to facilitate innovation systemically.
The major elements of a system that facilitates innovation are metadata, modularity, fluidity, and scale.
Metadata is how we come to understand where overlap can lead to innovation. It was the shared metadata, the common characteristic of having branches, that allowed visualization techniques from multiple fields to improve the diagnosis of heart conditions. An innovation system must repeatedly analyze and apply that meta-perspective to understand where overlap exists in the first place.
Modularity is the need to break artifacts down into elements that can work together. It was a lucky and rare event that an MRI machine could ingest astronomical data. Innovation systems need to facilitate that kind of interoperability intentionally and widely.
Fluidity refers to the ease with which members of a community can experiment with potentially innovative ideas. Systems can stack the odds with data and suggestions, but humans have to experiment to find the new ideas that are actually good. If it’s too painful to experiment, we’ll never try anything like jamming some astronomical data into an MRI machine.
Finally, scale is necessary to achieve a higher likelihood of innovation. More people, attributes, and artifacts increase the breadth and depth of metadata that reveals potential innovation.
I am a data scientist and an engineer. I’m trained to seek economy. And as I see it, all of this effort is, ironically, done in the name of creating and innovating less. Stop being so creative! Your problem, or key elements of it, has probably already been solved in some other form. There’s a way to discover the ideas you can rely on and translate to your field before you reinvent them.
If your job is to get your company, team, or community to innovate, you know how organizational forces can make it hard to even try something new. Visualizing the resources available is an effective first step in overcoming some of those organizational forces. Simply being able to see, and show, what you have allows you to make a compelling case for marshaling resources and even spark some initial interactions in that direction.
The Challenge: Moving Heaven and Earth
Within your organization, processes and structures already exist for doing things, and these can sometimes make it hard to try new things. People lose sight of one another, of what others are doing, and of how they might work together. The thought of trying new approaches becomes painful and sometimes demoralizing. As a result, even successful teams may stagnate.
So innovation managers, research development professionals, and team leaders of all kinds are forced into a battle with organizational inertia. No one can move what feels like a large celestial body or rearrange a solar system.
The Solution: Returning to Primordial Soup
Despite these impeding forces, there are ways that you can visualize the component parts of your teams and the relationships between them, as if the forces didn’t exist. By returning to your community’s primordial soup, you can reimagine and explain how people and their work might collide and recombine to create new things.
Instead of looking at the data you have in lists or tables, network visualizations can uncover relationships you never knew existed. Just seeing the resources available outside of their current structure sets your mind and your audience’s minds free to imagine what could be. You can form cogent notions for why new teams should be formed, where untapped ideas might live, and how novel approaches can be pursued. In fact, they almost pop out of the visualization at you.
An Example in Medical Research
The Osher Center for Integrative Medicine, a collaboration between Brigham & Women’s Hospital and Harvard Medical School, strives to be a “Center without Walls.” Facilitating collaborations among its many researchers and clinical practitioners is critical to its mission of enhancing human health, resilience, and quality of life.
Osher decided to feature on its website maps of connections between individuals and across institutions in both research and clinical practice.
Just by visualizing their community of researchers and their affiliations, Osher revealed the myriad of collaborations happening across disciplines and institutions.
But they didn’t stop there. They added the researchers’ work product, their publications, to the data.
This illuminated more untapped value. Coauthorship indicated who in the network map was a hub for research. They could see the thought leaders in their network and with whom those leaders were already collaborating.
Cross-referencing that with specializations, Osher could see who in their community had already crossed domains and who their collaborators were.
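Degree in a coauthorship graph is one simple way to surface those hubs. A minimal stdlib sketch with hypothetical publication records (a fuller analysis would use richer centrality measures):

```python
from collections import Counter
from itertools import combinations

# Hypothetical publication records: each entry is a list of author names.
publications = [
    ["lee", "park", "osei"],
    ["lee", "diaz"],
    ["park", "osei"],
    ["lee", "park"],
]

# Collect each person's distinct coauthors from every publication.
coauthors = {}
for authors in publications:
    for a, b in combinations(authors, 2):
        coauthors.setdefault(a, set()).add(b)
        coauthors.setdefault(b, set()).add(a)

# A person's "degree" is how many distinct coauthors they have;
# the highest-degree people are the hubs of the research network.
degree = Counter({name: len(peers) for name, peers in coauthors.items()})
print(degree.most_common(1))  # [('lee', 3)] : lee has coauthored with everyone else
```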
One of my favorite sayings is: “If you want something different to happen, you have to do things differently.” Too many of us look at this kind of information in lists or tables. Get modern and you will see new things. It starts with the same data as org charts and personnel directories, but it’s represented differently to expose thought leadership and collaboration.