Thought Leadership

Rothko's thought leadership features in multiple publications and venues. We aim to continue leading AI research applied to investment over the coming years. We believe this allows us to identify the most promising technologies, better exploit the inefficiencies of non-US equity markets, and, ultimately, deliver superior return and portfolio characteristics for our clients.

Part of our story is our collaboration with academia. Rothko is a partner of the Data Science Institute (DSI) at City, University of London and a participant in the Gillmore Centre for Financial Technology at the University of Warwick.

Featured Publications

Interpretable, Transparent, and Auditable Machine Learning: An Alternative to Factor Investing

By Dan Philps, PhD, CFA, David Tilles, and Timothy Law

Interpretability, transparency, and auditability of machine learning (ML)-driven investing have become key issues for investment managers as many look to enhance or replace traditional factor-based investing. The authors show that symbolic artificial intelligence (SAI) provides a solution to this conundrum, with superior return characteristics compared to traditional factor-based stock selection while producing interpretable outcomes. Their SAI approach is a form of satisficing that systematically learns investment decision rules (symbols) for stock selection using an Apriori algorithm, avoiding the need for error-prone secondary-explanation approaches (known as XAI). The authors compare the empirical performance of an SAI approach with that of a traditional factor-based stock selection approach in an emerging-market equities universe. They show that SAI generates superior return characteristics and would provide a viable and interpretable alternative to factor-based stock selection. Their approach has significant implications for investment managers, providing an ML alternative to factor investing but with interpretable outcomes that could satisfy internal and external stakeholders.
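To make the idea of learning interpretable decision rules concrete, the sketch below shows Apriori-style association-rule mining applied to stock selection. It is a minimal, self-contained illustration of the general technique only, not Rothko's actual model: the signal names (low_pe, high_roe, low_debt), the toy observations, and the support and confidence thresholds are all invented for the example.

```python
# Minimal sketch of Apriori-style rule learning for stock selection.
# Hypothetical example: signal names, data, and thresholds are invented.
from itertools import combinations

# Each observation: the set of boolean signals ("items") that held for a
# stock in a period, plus whether the stock subsequently outperformed
# (the consequent every rule predicts).
observations = [
    ({"low_pe", "high_roe", "low_debt"}, True),
    ({"low_pe", "high_roe"}, True),
    ({"high_roe", "low_debt"}, False),
    ({"low_pe", "low_debt"}, True),
    ({"high_roe"}, False),
]

MIN_SUPPORT = 0.4      # itemset must appear in >= 40% of observations
MIN_CONFIDENCE = 0.7   # P(outperform | itemset) threshold for keeping a rule

def support(itemset):
    """Fraction of observations containing every signal in the itemset."""
    return sum(itemset <= obs for obs, _ in observations) / len(observations)

def confidence(itemset):
    """Of observations matching the itemset, the fraction that outperformed."""
    matches = [up for obs, up in observations if itemset <= obs]
    return sum(matches) / len(matches) if matches else 0.0

# Apriori: grow frequent itemsets level by level, pruning any candidate whose
# support falls below the threshold (a superset can never be more frequent).
items = sorted({signal for obs, _ in observations for signal in obs})
frequent = [frozenset([i]) for i in items if support({i}) >= MIN_SUPPORT]
all_frequent = list(frequent)
while frequent:
    candidates = {a | b for a, b in combinations(frequent, 2)
                  if len(a | b) == len(a) + 1}
    frequent = [c for c in candidates if support(c) >= MIN_SUPPORT]
    all_frequent.extend(frequent)

# Keep only itemsets that form reliable rules: "IF signals THEN outperform".
rules = [(s, confidence(s)) for s in all_frequent
         if confidence(s) >= MIN_CONFIDENCE]
for antecedent, conf in sorted(rules, key=lambda r: -r[1]):
    print(f"IF {' AND '.join(sorted(antecedent))} THEN outperform "
          f"(support={support(antecedent):.2f}, confidence={conf:.2f})")
```

The point of the exercise is the output format: each learned rule is a plain conjunction of named signals with its own support and confidence, so the selection logic can be read, audited, and challenged directly rather than explained after the fact by a secondary XAI layer.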

Machine Learning: Explain It or Bust

By Dan Philps, PhD, CFA

“If you can’t explain it simply, you don’t understand it.”

And so it is with complex machine learning (ML).

ML now measures environmental, social, and governance (ESG) risk, executes trades, and can drive stock selection and portfolio construction, yet the most powerful models remain black boxes.

ML’s accelerating expansion across the investment industry raises novel concerns about reduced transparency and how investment decisions can be explained.

Frankly, “unexplainable ML algorithms [ . . . ] expose the firm to unacceptable levels of legal and regulatory risk.”

In plain English, that means if you can’t explain your investment decision making, you, your firm, and your stakeholders are in deep trouble. Explanations — or better still, direct interpretation — are therefore essential.