Introduction
Financial statements are increasingly being written for machines. Executives of heavily traded companies realise that they are no longer writing disclosures for the general investing public. Consequently, adversarial techniques can be used to alter financial statements to influence machines’ predictions. In this article, we explore the evidence for this behaviour. Paradoxically, the reason these adversarial techniques work is that the so-called intelligent machines are not yet able to contextualise as well as humans. In time, adversarial feedback loops will improve the machines’ capacity to produce and defend against hostile attacks, but it will remain a cat-and-mouse game as long as there are no regulatory obstructions.
In other domains, like self-driving cars, an adversarial attack might be as simple as placing two or three imperceptible stickers on a stop sign, fooling the machine learning model into thinking it is a 45mph speed sign, with fatal consequences.[1] Machines can be attacked in multiple ways; in a forthcoming book chapter, my co-authors and I highlight sixteen different ways in which deep learning options pricing and hedging models can be attacked. Among others, we look at poisoning attacks, perturbation attacks, and reward hacking.
Many stock prediction models are driven by alternative data sets that rest on a public copy of the internet, and this data should be treated with caution because it is mutable. Making investment decisions on mutable data could lead to more fragile and error-prone financial systems. Public-facing data started out as useful but is now actively being gamed. The genuine reviews and amassed data are stored and sold via API access to the highest bidder; the publicly available data is a front for premium subscriptions and the collection of more data.
These effects will lead to the public record becoming corrupted, whereas legitimate proprietary records will become more accurate but more exclusive. In the future, we should expect the performance of quantitative strategies that rely on publicly mutable alternative data to be significantly dampened. Public-facing fraudulent reviews benefit the data owner, who can keep the non-fraudulent data separate for paying investor consumption. As of now, publicly accessible alternative data is not overly corrupted and can still be put to good use, such as policy and investment decision-making.
In this post, I will argue that known adversarial attacks, colloquially referred to as market manipulation, are less harmful than attacks on unstructured data. Manipulation tactics like spoofing have only short-term consequences, whereas financial statement and alternative data ‘spoofing’ have long-term repercussions that can lead to financial bubbles.
Adversarial Reporting Theory
In this post, two hypotheses will be assessed, each resting on two to three assumptions.
Adversarial hypothesis 1: Self-reported data like earnings calls, earnings transcripts, financial statements, and other official disclosures are being adversarially instrumentalised by management to increase the stock price.
Management is interested in an increased stock price due to reputation and stock-price-indexed incentives like options.
Keywords, structure, and sentiment in official reports affect the price-level of a stock.
Adversarial hypothesis 2: Mutable alternative data sources are being fraudulently amended to move the stock price.
Investors and management want to change the price of a stock due to directional positions.
Mutable alternative data sources can affect the price of a stock.
Investors and management are able to adjust mutable alternative data.
Feedback Effects
Iceland’s Olafur Arnalds plays his piano alongside robotic piano-playing partners individually programmed to harmonise with his chords. Arnalds says that his interaction with the algorithms has altered his creative process and how he thinks about music. As he steps through a C note, the algorithm unexpectedly, albeit harmoniously, plays a sequence of chords that helps him unearth novel sounds. Arnalds tries to anticipate the joint melody of the shadow pianos, and he feels as if he is playing for them rather than for the crowd; the outcome is quite spectacular.
When algorithms are installed within a system of information flow, they often produce feedback effects. Modern examples include Spotify’s algorithms, which sit between the producer and the listener and have led to the shortening of songs in response to listening preferences. Better known is how the advent of digital media led to the shortening of news articles, and how Google’s PageRank algorithm transformed the way articles are written.[2]
If piano-playing algorithms or algorithms on the internet were our only worry, the world might still keep spinning, but as Nicholas Diakopoulos at Northwestern says, “[a]lgorithms are now involved in many consequential decisions in government, including how people are hired or promoted, how social benefits are allocated, how fraud is uncovered and how risk is measured across a number of sectors.”[3] Public intellectuals have for some time now warned us about the secondary effects and unintended consequences of algorithms.
In the more prosaic world of financial statement production, we are witnessing a similar trend: companies’ legal disclosures are being written based on how algorithms respond to them. These feedback effects will play out for as long as the algorithms compete with one another and improve. Arnalds’ example shows that it is not just algorithms that are adapting but also humans. A good example is the change in user behaviour that we are witnessing as moderation of the web increases. As algorithms police content, users are self-censoring and speaking in code more than ever.[4]
Suppose the algorithmic-feedback process is left to the mercy of fate, with one algorithm writing a financial report and another assessing the written work. In that case, we could end up in a world where financial statements become unintelligible to humans in the long run. There is no happy equilibrium that would eventually be reached; the story ends with a failure to deliver the system’s intended purpose. An apt, meant-to-scare story is that researchers at the Facebook AI Research Lab shut down a project in 2017 after discovering that their chatbots had created their own language that humans could not understand, at which point the system stopped serving its true purpose.[5]
Adversarial Techniques
Adversaries can hijack any vulnerable algorithmic system, and this is especially true in finance, where the use of black-box models is becoming more common. In this post, I am particularly interested in scenarios where an adversary seeks to undermine the communication channel for their own pecuniary benefit.
In finance, we have historical and current examples of adversarial attacks where one agent attempts to change the inputs to another agent’s model, leading to mispredictions. In high-frequency trading, this can be as simple as spoofing, which Dodd-Frank defines as bidding or offering with the intent to cancel the bid or offer before execution. In essence, false order-book messages are issued, compromising the victim’s algorithmic predictions and allowing the attacker to capitalise on the victim’s mistakes.
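To make the mechanism concrete, here is a minimal sketch, in Python, of how spoofed bids can skew a common order-book feature, the bid-ask volume imbalance, that a naive victim model might consume. The order book, sizes, and the choice of feature are illustrative assumptions on my part, not taken from any specific strategy or paper.

```python
# Minimal sketch (illustrative only): how spoofed bids skew an order-book
# imbalance feature that a naive victim model might trade on.

def book_imbalance(bids, asks):
    """Imbalance in [-1, 1]: positive values suggest buying pressure."""
    bid_vol = sum(size for _, size in bids)
    ask_vol = sum(size for _, size in asks)
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

# Hypothetical resting order book: (price, size) tuples.
bids = [(99.98, 200), (99.97, 300), (99.96, 250)]
asks = [(100.00, 400), (100.01, 350), (100.02, 300)]

print(f"Imbalance before spoofing: {book_imbalance(bids, asks):+.2f}")

# The spoofer posts large bids just below the best bid, intending to cancel
# them before they can ever execute.
spoofed_bids = bids + [(99.95, 2000), (99.94, 1500)]

print(f"Imbalance after spoofing:  {book_imbalance(spoofed_bids, asks):+.2f}")
# A victim model reading this feature now 'sees' strong buying pressure and
# may lift the offer, at which point the spoofer sells and cancels the bids.
```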
Spoofing and an assortment of other market manipulation techniques are illegal; in 2020, JPMorgan was fined $920m for treasury and precious metals market manipulation using spoofing techniques. One trader explained on a message board that he was simply doing “a little razzle dazzle to juke the algos...”[6]
A paper has recently been written on just this subject; it explains and empirically tests a plausible adversarial strategy in a reinforcement learning setting: “[an algorithmic] adversary can perturb the order book by placing (and cancelling) their own orders. These adversarial orders quickly appear on the public exchange and are fed directly into victim models.”[7]
Notably, these attacks are not cheap: “…there are challenges to attacks on order book data. An adversary’s malicious orders must be bounded in their financial cost and detectability. Moreover, the attacker cannot know the future of the stock market, and so they must rely on universal attacks that remain adversarial under a wide range of stock market behaviours. An adversary’s knowledge of the victim model is also limited; thus, we assess the effectiveness of these universal attacks across model architectures as well.”
The paper also rediscovers known manipulation strategies such as “spoofing” through adversarial algorithms deployed on historical market data. Spoofing is just one of many available manipulation tactics; many have gone unnamed and undiscovered due to a lack of research in this area. I have been part of project proposals to obtain entity-level data from some of the world’s largest exchanges and identify the reward functions of individual agents; unfortunately, this data is sensitive, and the legal sign-off, even in academia, is agonisingly slow. However, it is worth the wait; research in this area will be very telling and put many of our current speculations to rest.
Unstructured Data
Spoofing only alters order book market data, which is generally structured in nature. In the future, we should expect to see ‘spoofing’ attempts on alternative, unstructured datasets. The manipulation of market data leads to short-lived, transient changes in the asset price, whereas unstructured data manipulation could have quarterly or even annual effects.
If the manipulation of alternative data can lead to long-term changes in the stock price, should it not be at the top of regulators’ agendas? Moreover, order-book manipulation is expensive, whereas alternative data manipulation can be cheap, sometimes virtually free.
The adversarial susceptibility of non-market data will be assessed in the next section to highlight that data reported by companies or other third parties can also be manipulated. The assessment branches into two hypotheses with implications for investors and managers alike.
Collecting the Evidence
Company Disclosures
In the past, executives used crude techniques like hyped-up earnings calls, earnings management practices, hints of potential mergers and acquisitions, and many other shenanigans to push the value of a stock up. Up until the machine revolution, these different forms of signalling were staged at an intuitive level, informed only by management’s experience with public disclosures.
However, executives, script-writers, and publication companies now have access to years of unstructured data and quality research showing how certain sentiment patterns and specific keyword combinations can move the stock price in a favourable or unfavourable direction. Company disclosures fall under the first adversarial hypothesis because they are readily editable by management but not by investors.
It has been shown that “[m]achine downloads of quarterly and annual reports in the US (scraped by an algorithm rather than read by a human) has rocketed from about 360,000 in 2003 to 165m in 2016 (from 39% to 78% of downloads).”[8] It is also true that a growing group of quantitative funds is actively listening to and predicting a company’s next-period outcomes, such as earnings or the probability of announcing a merger. Instead of merely recognising words like “merger” or “acquisition”, deep learning models look at the interactions between thousands of data points, including features assessing the sentiment around words like “expand”, the tone of management when speaking about “an excess of cash”, and other statistics like paragraph length, word count, and the time of the earnings call, and how these variables evolve over time.
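To give a flavour of what such a pipeline might look like, here is a hedged sketch of a transcript feature extractor. The word lists, feature names, and sample text are my own illustrative assumptions; real systems use far richer lexicons, many more interacting features, and the audio itself.

```python
# Illustrative sketch of the kind of feature vector a transcript model might
# ingest; the word lists and features are assumptions, not any fund's recipe.
import re

POSITIVE = {"expand", "growth", "strong", "record", "beat"}
NEGATIVE = {"decline", "weak", "impairment", "restructuring", "miss"}

def transcript_features(transcript: str) -> dict:
    paragraphs = [p for p in transcript.split("\n\n") if p.strip()]
    words = re.findall(r"[a-z']+", transcript.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {
        "word_count": len(words),
        "paragraph_count": len(paragraphs),
        "avg_paragraph_len": len(words) / max(len(paragraphs), 1),
        "positive_hits": pos,
        "negative_hits": neg,
        "net_tone": (pos - neg) / max(pos + neg, 1),
        "mentions_excess_cash": int("excess of cash" in transcript.lower()),
    }

sample = (
    "We continue to expand into new markets and delivered record growth.\n\n"
    "An excess of cash on the balance sheet gives us flexibility."
)
print(transcript_features(sample))
```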
A number of executives of heavily traded companies are realising that they are no longer writing disclosures and holding conference calls for the general public, but rather for machines that use the latest natural language processing toolkits to sniff out performance correlations. This has led to the SEOification of company disclosures. It is a trend that is not stopping anytime soon; the ability to use machines to read and digest text and voice data permits analysts to focus on other tasks and allows trading decisions to be made faster. This productivity-speed duality will push the trend forward at lightning speed. It will not be long until we see GPT-3-level expertise applied to companies’ legal and voluntary disclosure documents.[9]
It would not be the first time machine-learning-powered methods have been used to automate tasks in finance. Skilled staff at JP Morgan Chase have already suffered a similar fate; a new contract intelligence programme was established to interpret commercial-loan agreements that previously required 360k hours of legal work by lawyers and loan officers.[10] Other examples include the automation of post-allocation requests at UBS[11] and policy pay-outs at Fukoku Mutual Life Insurance.[12] All I can say is hold onto your hat; at least four projects that I know of or am involved in would make 360k hours look like small change.
Let’s look at the evidence that this is already happening. Researchers at GSU have recently shown that a one-standard-deviation change in machine downloads (EDGAR filing downloads by a non-human) led to a 0.24 standard deviation increase in machine readability. In contrast, human downloads showed no apparent correlation, suggesting that management is taking note of its newfound audience and adjusting content accordingly.[13] In the paper, machine downloads are proxied by identifying IP addresses that download more than 50 unique firms’ filings on a given day, whereas machine readability is a composite measure capturing, among other things, the ease with which tables and numbers can be extracted from text, the use of familiar characters, and the lack of external supplements.
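A rough sketch of the machine-download proxy, as I understand it from the paper, might look like the following; the toy log data and column names are my own assumptions, and only the 50-unique-firms threshold comes from the paper.

```python
# Sketch of the machine-download proxy applied to EDGAR-style server logs:
# an IP is flagged as a 'machine' on a given day if it downloads filings of
# more than 50 unique firms. The log below is fabricated for illustration.
import pandas as pd

log = pd.DataFrame({
    "ip":   ["10.0.0.1"] * 60 + ["10.0.0.2"] * 3,
    "date": ["2016-03-01"] * 63,
    "cik":  [str(1000 + i) for i in range(60)] + ["2001", "2002", "2001"],
})

unique_firms = (
    log.groupby(["ip", "date"])["cik"]
       .nunique()
       .rename("unique_firms")
       .reset_index()
)
unique_firms["machine_download"] = unique_firms["unique_firms"] > 50
print(unique_firms)
# 10.0.0.1 touches 60 unique firms in a day -> flagged as a machine reader;
# 10.0.0.2 touches only 2 firms -> treated as a human reader.
```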
Simply improving machine readability doesn’t mean that management or the public relations office is taking on an adversarial role (yet). There is, however, plentiful evidence that firms ‘manage’ the general sentiment and tonality of their disclosures. The researchers show that firms with high machine downloads are more likely to avoid negative words in their reports to construct an improved sentiment.[14] And not just any words, but specifically words found in public lexicons known to be negative in a financial context, such as those produced by Loughran and McDonald since 2011.
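Here is a minimal sketch of the kind of document-level negative-tone measure being ‘managed’, assuming a simple word-share definition; only a tiny illustrative subset of the Loughran-McDonald lexicon is hard-coded, and the example sentences are fabricated.

```python
# Minimal sketch: negative tone as the share of words appearing in the
# Loughran-McDonald negative lexicon. Only a tiny illustrative subset of the
# lexicon is listed here; the real list contains thousands of words.
import re

LM_NEGATIVE_SUBSET = {
    "loss", "losses", "impairment", "litigation", "adverse",
    "decline", "restated", "misstatement", "default", "weak",
}

def negative_tone(text: str) -> float:
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    return sum(w in LM_NEGATIVE_SUBSET for w in words) / len(words)

before = "The decline in revenue and the impairment charge led to a loss."
after  = "Revenue softened and a non-cash charge affected reported results."
print(f"negative tone, original wording:  {negative_tone(before):.3f}")
print(f"negative tone, 'managed' wording: {negative_tone(after):.3f}")
```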
It has also been shown that managers’ vocal expressions can convey information valuable to analysts.[15] As a result, tests have been performed to assess the valence and arousal of managerial speech. Indeed, all else equal, the voices of managers of firms with higher machine downloads exhibit more excitement and positivity. Still, adjustments in the readability of statements and the tonality of conference calls are not targeted adversarial attacks, and evidence for targeted attacks has yet to be documented.
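For readers curious what such vocal measures look like in practice, below is a hedged sketch of crude arousal-style proxies computed from a call recording. It assumes the third-party `librosa` audio library and a hypothetical local file named `earnings_call.wav`; published studies use far more careful vocal feature extraction than this.

```python
# Hedged sketch: crude pitch and loudness proxies for vocal 'arousal' from a
# (hypothetical) recording of an earnings call, using the librosa library.
import numpy as np
import librosa

y, sr = librosa.load("earnings_call.wav", sr=None)   # hypothetical file name

# Fundamental frequency (pitch) via the YIN estimator; higher average pitch
# and pitch variability are crude correlates of excitement.
f0 = librosa.yin(y, fmin=65, fmax=300, sr=sr)
# Root-mean-square energy as a loudness proxy.
rms = librosa.feature.rms(y=y)[0]

print(f"mean pitch: {np.nanmean(f0):.1f} Hz, pitch std: {np.nanstd(f0):.1f} Hz")
print(f"mean loudness (RMS): {rms.mean():.4f}")
```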
That said, the technology and infrastructure are ready for targeted attacks. The staggering number of disclosures that can be matched against a timeline of stock prices makes it easier than ever to reverse-engineer agents’ decision-making processes and ‘guide’ the inputs of a victim’s machine learning model. As of today, anyone with a few Azure or Google credits can use publicly available disclosures and market data to identify the effect that specific words could have on, say, an earnings surprise prediction model; better yet, you can do it blindly by simply consulting the literature that has run the experiments for you, some of which will be discussed in the next section.
So what would a targeted attack look like? Using feature importance measures, management could, for example, reconstruct ten or so identified data patterns known to drive a quantitative fund’s merger and acquisition prediction up from, say, 10% to 60%, without ever having a genuine plan to acquire any company, and with the added plausible deniability of never using the words “merge”, “acquire”, or their synonyms.
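A hedged sketch of the reverse-engineering step: fit a simple stand-in for a fund’s M&A prediction model on disclosure snippets and read off which phrases carry the largest positive weights. The documents, labels, and resulting ‘signals’ below are fabricated purely to show the mechanics, not to suggest that any real model works this way.

```python
# Toy stand-in for a fund's M&A prediction model: a TF-IDF bag-of-phrases
# logistic regression whose coefficients reveal which wording pushes the
# predicted probability up. All documents and labels are fabricated.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "evaluating strategic alternatives to deploy our excess cash",
    "expanding capacity organically with disciplined capital spending",
    "engaged advisors to review inorganic growth opportunities",
    "focused on debt reduction and dividend stability this year",
]
labels = [1, 0, 1, 0]  # 1 = an acquisition was announced within a year (toy)

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Phrases with the largest positive coefficients are the ones management
# could sprinkle into a disclosure to nudge the model's M&A probability up.
coefs = clf.coef_[0]
top = np.argsort(coefs)[::-1][:5]
for i in top:
    print(f"{vec.get_feature_names_out()[i]:30s} {coefs[i]:+.3f}")
```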
As long as these machines remain so easily fooled, management can iteratively test various techniques to see what they can get away with. The easiest would be to deliberately implant a positive (in-context) statement or quote into an objectively negative disclosure, attempting a “poisoning attack” on the document’s sentiment and tonality. In time, the machines will be reprogrammed once their operators notice that they are being attacked and losing money; it is then up to the publisher (management) to find a way around the adversarial defence and introduce further attacks in this dynamic environment. In the above example, I can imagine a machine re-parametrising to check for stark contrasts in sentence-to-sentence sentiment instead of document-level sentiment as a possible defence.
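A sketch of that defence, assuming a crude lexicon-based sentiment score: flag documents whose sentence-level sentiment disagrees sharply with itself. The word lists, the example disclosure, and the flagging threshold are all illustrative assumptions.

```python
# Sketch of the sentence-contrast defence: instead of trusting one
# document-level sentiment score, check how much sentence-level sentiment
# disperses within a single disclosure. Word lists and threshold are toys.
import re
from statistics import pstdev

POS = {"pleased", "strong", "growth", "excellent", "record"}
NEG = {"writedown", "decline", "lawsuit", "breach", "losses"}

def sentence_sentiment(sentence: str) -> float:
    words = re.findall(r"[a-z]+", sentence.lower())
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return score / max(len(words), 1)

def flag_possible_implant(document: str, threshold: float = 0.05) -> bool:
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    scores = [sentence_sentiment(s) for s in sentences]
    # High dispersion of sentence sentiment inside one disclosure is a crude
    # signal that a positive statement may have been implanted.
    return pstdev(scores) > threshold if len(scores) > 1 else False

doc = ("The writedown and the decline in margins led to further losses. "
       "The pending lawsuit over the data breach remains unresolved. "
       "We are pleased to report excellent, record growth in brand strength.")
print(flag_possible_implant(doc))  # True: one glowing sentence amid bad news
```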
In time, humans might be removed from this adversarial loop altogether, and machines will converse with machines. Even today, before such robotic process automation arrives, management can use the latest adversarial machine learning techniques to cryptically communicate falsehoods without direct legal consequences, thanks to a large buffer of plausible deniability.
Management can use these attacks to massage their stock price upwards for additional stock-price-indexed bonuses, especially if they can draw on a golden parachute for a soft landing. Remuneration is not the only incentive; adversarial tactics can also be used to shake short-sellers or activist investors off management’s back.
It becomes quite troubling when new technologies are retrofitted over legacy systems as the original use falls out of fashion. At this point, regulators have to ask themselves whether these disclosures are still serving their purpose. Maybe there is a need for further standardisation, or even a legally mandated, arm’s-length third-party publisher that sits between the company and the public. Still, it isn’t immediately clear whether either of these would lead to a more efficient solution.
Campbell’s Law is appropriate here: “[t]he more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”[16] The structure and content of company disclosures are increasingly relied upon by algorithmic trading systems, and we should therefore expect some level of targeted manipulation, if not now, then in the near future.
Alternative Data
Adversarial attacks on alternative data are nothing new. In the past, rogue agents have injected false information into the EDGAR filing system and hacked Twitter accounts to manipulate stock prices.[17] In 2013, the Associated Press’s Twitter account was hacked and it was falsely reported that there had been an explosion at the White House, leading to an immediate drop in the Dow.[18] It is alleged that sentiment algorithms were partly responsible for the initial drop by ‘analysing’ and trading on the tweet as if it were real news.
Even job listings or executive jet records qualify as alternative data. This is not science fiction: in 2018, shares in a small cancer-drug company called Geron Corporation spiked 25% after its partner Johnson & Johnson posted a job listing hinting that a key regulatory decision was imminent. In 2017, the flight details of a Gulfstream V were used to predict a $10bn investment by Warren Buffett. Knowing that algorithmic traders are paying attention to these sources gives adversarial investors the upper hand; they can now implant misleading data points in the information channel, i.e., fake job listings, or orchestrate misleading flights.
There is an endless number of ways to attempt adversarial attacks. In this section, we will investigate the ‘perfect’ adversarial attack as performed by investors, looking at multiple concurrent attacks on employee and company reviews, job listings, product reviews, company information, and news reports, all of which can lead to unwarranted changes in the stock price of a company.
First things first: are these data sources actually used by quantitative funds? The answer seems to be an unequivocal yes, as evidenced by multiple interviews captured by financial journalists. There is also evidence scattered across the internet; one company, Vertical Knowledge, for example, sells simple Glassdoor data for $30k a year, and thousands of similar datasets can be found all over the web. I also performed some small-scale experiments on Glassdoor data in 2016 and found that it could be used to form a portfolio strategy that earns statistically significant profits. Whether this data has been priced in as of 2021 is an open question.
Also in 2016, MaryJo Fitzgerald, the Corporate Affairs Manager at Glassdoor, wrote that they “hear from and talk to investors who use the data on our site all the time”.[19] BlackRock, the world’s largest asset manager, has also been reported to use Glassdoor data for investment decision-making.[20] Extending beyond stock market effects, Yelp data has also shown promise in predicting local economic outlooks and has been shown to be useful for policy analysis.[21] As a consequence, fraudulent reviews could curry favour for regional councils if used as part of a nowcasting policy agenda.
In an experiment performed in 2017, I gathered around 1,200 data points covering all the publicly traded restaurant chains by developing scrapers for Yelp, Spyfu, Similarweb, Morningstar, LinkedIn, Instagram, Glassdoor, Facebook, Eat24, Doordash, and Angellist (see GitHub). The project drew on public data points for more than 24k individual facilities of larger chains, and the data was used to identify the competitive dynamics between chains. Using supervised machine learning, I was able to show, as part of an open-source interactive corporate report, that at the end of 2017, among other things, BJ’s Restaurants was 40% undervalued compared to its competitors. A renewed version was picked up by a large chain, which responded that “if investors are indeed using this data, they know more about our firm than we do.” It is expensive to keep the report running, so the web app is disabled, but feel free to look through the data outputs.
In the process of working with alternative data, I have become aware of how easy it is to introduce fraudulent data points into public data sources, and of the incentives that condition this behaviour. Research by academics at HBS shows that a one-star increase in a Yelp rating can lead to a 5 to 9 percent increase in a company’s revenue. Management and investors therefore have an incentive to produce fraudulent reviews, whether to attract customers or to make money on a trade. Research indeed shows that at least 16% of Yelp reviews were fraudulent as of 2016.[22] In the future, we should expect the performance of quantitative strategies that rely on publicly mutable alternative data to be significantly dampened by this adversarial race. This will become a growing concern as even sophisticated companies like Amazon are struggling to undo the damage done by bots.
Paradoxically, fraudulent reviews work in the favour of many publicly scrapable review websites, for a few reasons. First, they allow the websites to establish a ‘special’ relationship with small companies to help them clean up notionally ‘fraudulent’ reviews, i.e., bad reviews; these special relationships cost companies around $300 per month. With both Glassdoor and Yelp, at least anecdotally, you can have negative reviews removed outright by signing up to a premium service. Second, with the public dataset disturbed by poor-quality and fraudulent reviews, the company can package a special version of the data that is not publicly available, to be sold to hedge funds or made available through exclusive APIs. The public-facing data started out as useful, but it is now gamed; the authentic reviews and data are stored and sold via private distribution networks.
As alternative data gains more prominence in academia, management, and investment circles, these concerns will become ever more apparent. We will see robots locked in endless zero-sum information warfare campaigns using computationally expensive adversarial techniques: a sort of ‘active measures’ for publicly traded companies.
Circling back to something more traditional, news articles have been used for trading stocks for many decades; in fact, they have been used since the dawn of the publicly traded company. As soon as seven years after the creation of the Dutch East India Company (DEIC) in 1602, a newsletter had been established to report on the fortunes of DEIC ships, including the type and quantity of cargo on board.[23]
Unlike the 17th-century investor newsletters, today’s news outlets are actively monitored by computers performing entity, context, and sentiment recognition. Targeted adversarial attacks can be performed on press releases to persuade algorithms in a particular direction. Although not necessarily targeting each other’s stock price, FANG firms have reportedly hired ‘arm’s length’ PR firms to write negative articles about each other.[24] As seen before, these targeted attacks can also hit small establishments, enabled by review websites like Google Reviews, Facebook Reviews, and Yelp.[25]
In 2019, I had a conversation with Paul Glasserman about the use of machine learning in financial economics, and he told me that one of his colleagues at Columbia had obtained a substantial corpus of news articles from Thomson Reuters that they wanted to use to study its association with stock market returns. They released a paper in late 2020 showing how news article topics can be used to explain stock returns, employing more than 90k articles across five years. Table two of their paper lists topic groupings with their associated coefficients in explaining returns.[26] The unsupervised LDA technique they use identifies collections of words and morphemes (word subparts) as part of 200 topics; below I show the top three negative and positive topics (a topic consists of a group of words) with their coefficients.
Negative:
price cut drop fall fell lower decline analyst quarter weak (-0.098)
inside https video watch morn trade short transcript open (-0.011)
climate coal carbon emiss energie cruise gas fuel environment norway (-0.009)
Positive:
quarter revenue analyst expect earn estim profit result forecast rose (0.015)
korea korean hospit hca seoul tenet oper kim jin lee (0.011)
deal offer buy acquisit cash close merger bid combin agree (0.011)
Public relations offices can use this research, conducted on data from 2015-2019, to gauge which news article content is most associated with positive and negative returns, and then construct prediction-worthy articles to dupe a trading algorithm with access to the same data. They would probably want to limit themselves to 20 or so topics to avoid overfitting, while not being so specific as to be caught out.
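As a toy illustration of how a draft press release could be scored against the published topics, the sketch below counts stem overlap with the topic word lists quoted above and weights the counts by the reported coefficients. The overlap scoring is my simplification; the paper itself infers topic loadings with LDA, and the draft text is fabricated.

```python
# Hedged sketch: 'score' a draft press release against the quoted topic word
# stems, weighting stem overlap by the paper's return coefficients. This is a
# simplification of the paper's LDA-based topic loadings.
import re

# Topic word stems and return coefficients as quoted above.
TOPICS = {
    ("price", "cut", "drop", "fall", "fell", "lower", "decline",
     "analyst", "quarter", "weak"): -0.098,
    ("quarter", "revenue", "analyst", "expect", "earn", "estim",
     "profit", "result", "forecast", "rose"): 0.015,
    ("deal", "offer", "buy", "acquisit", "cash", "close", "merger",
     "bid", "combin", "agree"): 0.011,
}

def draft_score(text: str) -> float:
    words = re.findall(r"[a-z]+", text.lower())
    score = 0.0
    for stems, coef in TOPICS.items():
        hits = sum(any(w.startswith(s) for s in stems) for w in words)
        score += coef * hits / max(len(words), 1)
    return score

draft = ("Revenue rose above forecasts as the company agreed a cash deal "
         "to combine with a regional rival.")
print(f"{draft_score(draft):+.5f}")  # positive: loads on return-positive topics
```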
It is not just publicly available data that is mutable and at risk of adversarial attacks; data brokers such as credit bureaus, healthcare firms, and credit card companies collect troves of customer data that they sell to hedge funds. With some effort, these alternative datasets can also be compromised. For example, genuine-looking credit card transactions can be fabricated and then immediately refunded or reversed through alternative channels. Algorithms and traders would then fall victim to the poisoned aggregate credit card flows that would ordinarily have helped them predict company revenue before its public release.
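A toy illustration of why this works against a naive aggregator: a fabricated purchase that is immediately refunded inflates a gross-spend signal, while a refund-netted signal is largely unaffected. All numbers and column names below are made up.

```python
# Toy illustration (all numbers fabricated): an injected purchase/refund pair
# inflates a naive 'gross spend' aggregate that a fund might use to nowcast
# revenue, while a refund-netted aggregate stays close to reality.
import pandas as pd

txns = pd.DataFrame({
    "merchant": ["ACME"] * 6,
    "type":     ["purchase", "purchase", "purchase",
                 "purchase", "refund", "purchase"],
    "amount":   [120.0, 80.0, 150.0, 5000.0, -5000.0, 60.0],
})
# The 5,000 purchase immediately followed by its refund is the injection.

gross = txns.loc[txns["type"] == "purchase", "amount"].sum()
net = txns["amount"].sum()

print(f"Naive gross spend signal: {gross:,.0f}")   # 5,410 -- poisoned
print(f"Refund-netted signal:     {net:,.0f}")     # 410 -- closer to reality
```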
Even without fraudulent or deliberate dataset attacks, many alternative data sources already contain biases or implicit feedback effects that can have negative implications for prediction models. For example, alternative data has been used to forecast the spread of infectious disease and has failed dismally, as in the infamous Google Flu Trends fiasco.[27] In that case, as the infection spreads, media attention accumulates and disturbs any meaningful trends that could otherwise be discerned from Google searches. This particular concern makes it hard for any big-data or nowcasting system that relies on social media or other mutable data during new and unprecedented events.
Afterthought
The consequences of alternative data’s mutability extend further than stock trading. As noted above, Yelp data has shown promise in predicting local economic outlooks and has been shown to be useful for policy analysis.[28] Consequently, fraudulent reviews could curry favour for regional councils if used as part of a nowcasting policy agenda. In the wake of the pandemic, governments have also started to rely on alternative data sources; in 2020, governments almost universally pushed for more timely statistics in the form of alternative data. For example, Eurostat has signed agreements with Airbnb, Booking, Expedia, and TripAdvisor to access their data on short-term accommodation.[29] And the Federal Reserve received credit card data at a 3-day lag, which it has reportedly leaned on heavily during the pandemic.[30] The information contained within alternative data will soon have global consequences, with many hands in the pot.
Last year, a friend prodded me to listen to the earnings conference call of a company his team suspected of fraud. As I tuned in, I noticed a smoothness in the call unlike anything I had heard in person. As the first few minutes passed, I realised that the call was pre-recorded. In the second third of the call, I started paying attention to the language and was surprised by the slogans, catchphrases, and growth figures scattered around almost indiscriminately, with little use to the listener. There was no narrative or story, no humour or insight, just pitiful statistics atomised into bits of facts: the natural preference of newly minted analysts wielding powerful computers.
The consequence of this is quite simple: financial statements, conference calls, and publicly mutable data will soon become useless and unintelligible to the average investor; data will be engineered to be ingested by machines. The truthfulness of public reviews will be suspect for as long as there is no corroboration and certification process for online data. In time, public data might benefit from cryptographic identity keys, something that will also become essential in the age of deep fakes. For example, future cameras might give a picture or illustration a proof-of-origin stamp, something that Adobe is working on.[31]
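As a toy illustration of the proof-of-origin idea, the sketch below signs a hash of some content with an Ed25519 key and verifies it later; it assumes the third-party `cryptography` package and is only a minimal analogy for the kind of scheme Adobe and others are exploring, not their actual design.

```python
# Toy illustration of a 'proof-of-origin stamp': the capture device signs a
# hash of the content; anyone holding the corresponding public key can later
# verify that the bytes are unaltered. Requires the `cryptography` package.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()        # would live in secure hardware
content = b"raw image or document bytes"         # placeholder content

digest = hashlib.sha256(content).digest()
stamp = device_key.sign(digest)                  # the proof-of-origin stamp

# Verification by a third party (raises InvalidSignature if content changed).
device_key.public_key().verify(stamp, digest)
print("origin verified")
```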
Identifying adversarial attacks will be hard, and supervising and regulatory authorities will have to use machines of their own to screen for plausible attacks. If supervisors are willing to take on this role, they will quickly realise that there are more adversarial attacks than can effectively be dealt with. Consequently, I predict that they will instead keep focusing on market manipulation methods like spoofing, which are in all likelihood less devastating than adversarial reporting attacks, but which are more widely known and therefore better able to draw the public’s attention to the excellent work being done by the regulator.
Notwithstanding the new alternative data spoofing techniques, order book spoofing, which has been around for many years, is itself hard and almost impossible to detect when performed well, so even the prosecution of well-known and illegal adversarial attacks is a challenging problem. First, the sheer amount of data produced by high-frequency trading across many financial products and venues makes it extremely hard to trace in real time who is behind every trade; a central clearing party mainly has access to the broker ID through which the trade has been channelled, resulting in only aggregated information. Second, a potential spoofer might post those trades through different venues and brokers. Third, aside from a loose definition, it is unclear how a spoofing strategy differs quantitatively from other strategies. The complexity of quantifying and discriminating spoofing strategies from legitimate ones will carry over to the alternative data domain, with more considerable repercussions.
Future Day Trader:
“I bought short-term out-of-money put options on robinhood early morning and by evening, I promoted a fake twitter campaign by getting an anonymously published post to the top of reddit’s /finance, using click-farm software, targeting keywords historically negatively correlated with UBSFY stock movements, while publishing concurrent fake reviews on google, glassdoor, and facebook using selenium”
References
[1] https://arxiv.org/pdf/1707.08945.pdf
[2] https://www.prsformusic.com/m-magazine/features/song-length-the-spotify-effect/
[3] https://www.theneweconomy.com/technology/the-troubling-influence-algorithms-have-on-how-we-make-decisions
[4] https://www.socialcooling.com/
[5] http://www.digitaljournal.com/tech-and-science/technology/a-step-closer-to-skynet-ai-invents-a-language-humans-can-t-read/article/498142
[6] https://www.bloomberg.com/news/articles/2020-09-29/jpmorgan-pays-920-million-admits-misconduct-in-spoofing-probe
[7] https://www.groundai.com/project/adversarial-attacks-on-machine-learning-systems-for-high-frequency-trading/2
[8] https://www.nber.org/papers/w27950
[9] https://openai.com/blog/openai-api/
[10] https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance
[11] https://www.ft.com/content/da7e3ec2-6246-11e7-8814-0ac7eb84e5f1?mhq5j=e6
[12] http://fortune.com/2017/01/06/japan-artificial-intelligenceinsurance-company/
[13] https://www.nber.org/system/files/working_papers/w27950/w27950.pdf
[14] https://www.nber.org/system/files/working_papers/w27950/w27950.pdf
[15] https://sci-hub.st/https://doi.org/10.1111/j.1540-6261.2011.01705.x
[16] https://en.wikipedia.org/wiki/Campbell%27s_law
[17] https://www.cnbc.com/2019/01/15/international-stock-trading-scheme-hacked-into-sec-database-justice-dept-says.html
[18] https://foreignpolicy.com/2013/04/23/syrian-electronic-army-takes-credit-for-hacking-ap-twitter-account/
[19] https://www.quora.com/Is-Glassdoor-com-useful-to-make-better-investments
[20] https://www.businessinsider.com/how-blackrock-uses-alternative-data-for-impact-investing-2016-6
[21] https://conifer.rhizome.org/snowde/the-finance-parlour/https://www.hbs.edu/faculty/Publication%20Files/18-022_b618d193-9486-4de3-abc4-232e1baecbeb.pdf
[22] https://dash.harvard.edu/bitstream/handle/1/22836596/luca,zervas_fake-it-till-you-make-it.pdf?sequence=1
[23] https://pure.uva.nl/ws/files/1427391/85961_thesis.pdf
[24] https://www.cnbc.com/2018/11/14/facebook-hired-pr-firm-that-wrote-negative-articles-about-rivals-nyt.html
[25] https://www.cnbc.com/2018/11/14/facebook-hired-pr-firm-that-wrote-negative-articles-about-rivals-nyt.html
[26] https://arxiv.org/pdf/2010.07289.pdf
[27] https://cacm.acm.org/magazines/2016/6/202655-what-happens-when-big-data-blunders/fulltext
[28] https://conifer.rhizome.org/snowde/the-finance-parlour/https://www.hbs.edu/faculty/Publication%20Files/18-022_b618d193-9486-4de3-abc4-232e1baecbeb.pdf
[29] https://ec.europa.eu/eurostat/web/products-eurostat-news/-/CN-20200305-1
[30] https://www.ft.com/content/9e0e2038-6131-11e9-a27a-fdd51850994c
[31] https://www.wired.com/story/photoshop-id-images-photoshopped-deepfake/