A.I. and the Data Revolution Series Kickoff

Writeup prepared by Joe Iagulli, CFA

On November 8, 2017, the Fintech Thought Leadership Group of CFA Society New York, chaired by Carole Crawford, CFA, presented the first installment of the A.I. and the Data Revolution Series. The event sold out, with a record number of additional participants watching on livestream. Attendees gained an understanding of current developments in AI and related methods of analysis, learned which areas of the financial services sector these innovations are affecting, and heard the panelists discuss best practices for adopting and incorporating data-driven developments into their respective practices.

The keynote, an overview of Artificial Intelligence applications, was delivered by Jack Kokko, CEO and Founder of AlphaSense, an intelligent search engine for financial services analysts.

Artificial Intelligence (“AI”) is currently embedded in numerous industries, and many people are unaware of this emerging technology’s impact. People might associate AI with the industrial robots in the manufacturing sector, which are optimized for single functions along an assembly line. These machines are more efficient than humans at those distinct tasks, but operate only within narrow boundaries. AI algorithms (“algos”) are on a similar path right now: excellent at certain tasks like playing video games, identifying relevant information within documents, or generating statistical reports, but still limited in their range of abilities.

Machines and algos have immense scalability at accomplishing single tasks and provide consistent execution, but they lack creativity and can only deliver outputs defined by parameters that programmers set. Humans possess creative abilities and can assess the quality and relevance of information, but they lack the capacity for many tasks and can be overly influenced by biases. The intended impact of AI applications is to augment the processes of human workers, helping people evaluate multiple sources of information while spending less time on data gathering and manual processes. AI provides clarity amid information overload, helping to process the vast amount of data that is now available.

AI software utilizes Natural Language Processing (“NLP”) algos that can scan numbers and text within financial statements, research papers, and news publications to find relevant patterns. These algos perform context analysis and tag the most applicable materials, which analysts might otherwise miss given the exponential growth in available information.
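At its simplest, this kind of document tagging reduces to scoring each document against terms an analyst cares about. The sketch below is a minimal illustration of the idea, not AlphaSense’s actual method: the watchlist and scoring are hypothetical, and production NLP systems use trained models rather than fixed keyword lists.

```python
import re
from collections import Counter

# Hypothetical watchlist of terms an analyst might flag;
# real systems learn relevance from context, not keyword matching.
WATCHLIST = {"guidance", "margin", "restructuring", "impairment"}

def tag_document(text: str) -> dict:
    """Tag a document with watchlist terms it contains and a crude relevance score."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in WATCHLIST)
    return {"tags": sorted(counts), "score": sum(counts.values())}

doc = "Management lowered full-year guidance, citing margin pressure."
print(tag_document(doc))  # {'tags': ['guidance', 'margin'], 'score': 2}
```

In practice, the value comes from running this kind of scoring across thousands of filings and transcripts at once, surfacing only the documents an analyst should read first.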

Additionally, AI software can include “sentiment” algos that read phrases and determine whether the overall message conveyed is positive, neutral, or negative. One such sentiment algo was tested on ten years of earnings commentaries for constituents of the S&P 500 index; the resulting sentiment indicator was highly correlated with price movements of the index in back-tested data.
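The core mechanic of classifying a phrase as positive, neutral, or negative can be sketched with a toy lexicon-based scorer. The word lists below are invented for illustration; production sentiment models are trained on domain-specific corpora such as earnings-call transcripts rather than hand-picked vocabularies.

```python
# Toy lexicon-based sentiment scorer (illustrative word lists only).
POSITIVE = {"growth", "strong", "record", "exceeded"}
NEGATIVE = {"decline", "weak", "miss", "headwinds"}

def sentiment(phrase: str) -> str:
    """Label a phrase by the net count of positive vs. negative words."""
    words = set(phrase.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("record revenue and strong growth"))     # positive
print(sentiment("demand decline amid macro headwinds"))  # negative
```

Aggregating such phrase-level labels across every earnings commentary in an index, period by period, is what yields a sentiment time series that can be back-tested against price movements.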

AlphaSense software functions include idea generation (examining sentiment inflection points), risk management (alerts on inflection points), quantitative factor inputs, and determining relevant analysis (which documents to read). The software doesn’t favor either quantitative or fundamental analysis processes; it’s mainly intended for a new class of “augmented” analysts.

The keynote was followed by a panel of speakers that included Ulrike Zeilberger, Executive Consultant at the Watson Center of Competence, IBM; Alexey Loganchuk, Founder of Upgrade Capital, a recruiting firm for technology and investment professionals; Edward Oliver, Director of Finance Sales for Dataminr, a real-time information discovery company; and Mutisya Ndunda, CEO and Founder of Alpha Vertex, an AI-enabled research firm for investment professionals. The panel was moderated by Matei Zantrenu, Founder of System 2, a data analysis consulting firm for investment professionals.

Unstructured data sets are the main focus for the capabilities and features of AI applications. Three examples of its application are included below.

  • Analyzing Imagery: AI still struggles to identify particular objects in imagery (the popular “what is a cat” problem) but can be applied in certain settings to produce quantitative data. Satellite imagery has been especially impactful, with some examples listed below.
    • Photos of shopping malls and downtown areas can be used to count cars in parking lots and, paired with location data, track shopping patterns over time.
    • The US government can track crop yields on farmland in near real time to estimate supply and anticipate potential impacts.
    • Some oil tanks have floating lids that sink as oil is depleted; the sun-cast shadow this produces can be tracked to estimate remaining capacity.
  • Cell-Phone applications: Data available from iPhone and Android users can include the following.
    • The number of application downloads
    • The time spent within the applications
    • Items viewed and purchased on shopping applications

These data sets are more indicative of consumer behavior, help assess real-time trends, and are helping advertisers be more efficient in their targeted advertisements.

  • Social Media posts: With Natural Language Processing, AI applications can determine the sentiment of posts on Twitter, Facebook, news sites, and blogs.
    • Quality checks include crowdsourcing posts with similar sentiment, giving more weight to reputable or well-known contributors, and disregarding posts that are created by “bots” or “trolls”.
    • Social media applications can effectively self-regulate, as users crowd-source the verification and detection of “fake news”. AI will typically ignore single data points or posts, and algos are getting better at removing irrelevant information. Bots typically exhibit patterns in their posts, and applications are learning to detect these and discard the irrelevant information.
    • Examples of trends where social media outpaced major news outlets include the Catalonia referendum, North Korea missile launches, terrorist attacks, and the Chipotle E. coli outbreak. Twitter outpaced the news on the recent terror attack in NYC; reporters are using social media to stay informed and cover a scope that wasn’t possible with traditional sources.
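The quality checks described above — weighting reputable contributors and discarding bot posts — amount to a filtered, weighted average of post sentiment. The sketch below is a hypothetical illustration; the field names, reputation scores, and bot flags are invented, and real systems infer them from behavioral patterns rather than taking them as given.

```python
# Hypothetical weighted aggregation of post sentiment:
# reputable authors count more, suspected bots are dropped entirely.
posts = [
    {"sentiment": +1, "author_reputation": 0.9, "is_bot": False},
    {"sentiment": -1, "author_reputation": 0.2, "is_bot": False},
    {"sentiment": -1, "author_reputation": 0.8, "is_bot": True},  # excluded
]

def aggregate(posts: list) -> float:
    """Reputation-weighted average sentiment over non-bot posts."""
    usable = [p for p in posts if not p["is_bot"]]
    total_weight = sum(p["author_reputation"] for p in usable)
    return sum(p["sentiment"] * p["author_reputation"] for p in usable) / total_weight

print(aggregate(posts))  # (0.9 - 0.2) / 1.1 ≈ 0.636
```

Note how the bot post, despite its high nominal reputation, never enters the average — detection happens before weighting.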

Current applications of AI discussed by the panel

  • At IBM, AI helps make augmented decisions such as determining pain points, formulating possible solutions to these issues, and developing methods to improve these processes.
  • Many companies have too much data stored in “silos”, in various formats such as Microsoft Office files, PDFs, or paper. The best way for these companies to proceed is to extract the data from these formats and convert it into digitized forms that AI algos can read. Expanding the available data increases the efficiency of the algos.
  • Financial analysts can now evaluate data that wasn’t previously easy to obtain. They don’t need to change their evaluation processes, but they can certainly look beyond historical information and incorporate real-time data to form opinions and make more informed decisions.
  • AI outputs never have a single solution; they provide an optimal answer with a degree of confidence and margin of error. With these outputs, people can change their opinions or biases when provided optimal or enhanced information.
  • If a data set is relatively limited for an AI algorithm, reinforcement learning can be used. The algorithm self-adapts when it makes mistakes and is rewarded (i.e., reinforced) for correct decisions, similar to how people might train animals.
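The reward-and-penalty loop in that last bullet can be sketched with a minimal reinforcement-learning example. The setup below is an epsilon-greedy two-armed bandit with made-up reward probabilities — a deliberately simple stand-in for the richer methods the panel alluded to.

```python
import random

# The agent does not know these probabilities; it must learn them
# from reward feedback alone. Values are invented for the example.
random.seed(0)
TRUE_REWARD_PROB = {"a": 0.2, "b": 0.8}

values = {"a": 0.0, "b": 0.0}   # running estimate of each action's reward
counts = {"a": 0, "b": 0}

for step in range(1000):
    # Explore a random action 10% of the time; otherwise exploit
    # the action with the best estimated reward so far.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0
    counts[action] += 1
    # Incremental running-average update: correct choices pull the
    # estimate up (reinforcement), mistakes pull it down.
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent learns to prefer "b"
```

The key point for limited data sets is that the agent generates its own training signal by acting and observing outcomes, rather than requiring a large labeled history up front.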

Issues facing AI applications discussed by the panel

  • AI is a collection of tools that attempt to make an optimal decision based on available data and a defined set of parameters. If the data and parameters aren’t well known to those creating the report (a “black box”), end-users can’t effectively evaluate the results and consumers can’t understand them.
  • People need to properly frame the problems they’re attempting to solve and define the relevant data. AI applications can produce many results from available data, but it’s very important that inputs are well understood and fit within the framework of the issues to resolve.
  • Technology may be evolving a bit too fast, and controlling its spread is a concern. OpenAI is an organization helping to develop a framework to address both the positives and negatives of these innovative AI solutions.
  • More data will usually improve the efficiency of these algos, so clients of AI programs have to contribute relevant and clean data. If clients are overprotective or stop providing data, they can’t effectively use AI applications to learn or grow in an evolving competitive environment.