One of the things that is on our clients’ minds when filing patents is whether the invention is detectable – after all, a patent is of limited use if you can’t detect infringement.
There seems to be a general assumption in the industry that Machine Learning (ML) models can’t be reverse engineered due to their black box nature, and that there is limited value in pursuing patents for “hidden AI”.
To get to the bottom of the detectability question, we sat down with Chris Whittleston, Delivery Lead at Faculty, one of the UK’s well-known AI companies, for an informal chat about reverse engineering. We asked him what might be detectable in different scenarios; this is a summary of what we learnt.
If you can get hold of a software product, e.g. an executable file, then it is highly likely that you can determine whether a machine learning model is being used
Many AI inventions fall into the category of what is known in Europe as “Applied-AI”. That is, they use known AI models in an innovative way.
In this scenario, if an Applied-AI invention were incorporated into a product, it is highly likely that open-source libraries would be used to create and deploy the ML model. After all, open-source libraries are reliable, well tested, and, well, why re-invent the wheel? Various open-source libraries exist and their code is freely available for download.
If an open-source library were used in a product, this would be detectable through decompilation. Decompilation is a reverse engineering technique which recreates human-readable, high-level source code from compiled executable code (which is not human readable). Although decompilation allows an engineer to reconstruct the functions and variables called by the executable code, the original names and labels for these variables and functions are replaced with random character strings. However, the replacement is performed consistently throughout the decompiled code, meaning that each variable, function name and the like is replaced with its own individual random character string.
Whilst the names and labels given to the various functions in the decompiled code may be unintelligible, the structure of these functions can be cross-referenced against known functions found in open-source libraries. Through this cross-referencing, an engineer can recognise and identify the functions being called upon in the decompiled code.
It is therefore likely to be possible to identify known neural network code structures in this way, such as the code used to create the layers of the neural network or the code used to perform common machine learning tasks (e.g., backpropagation).
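As a minimal sketch of this cross-referencing idea, consider matching a decompiled function against a library of known function bodies after normalising away the renamed identifiers. The "library" snippets, the decompiled line and the matching rule below are all invented and highly simplified; real matching would compare control-flow and call structure across whole functions, typically with dedicated tooling.

```python
import re

# Invented, highly simplified "known library" of one-line function bodies.
KNOWN_LIBRARY = {
    "relu_forward": "out = max(0, x); return out",
    "dense_forward": "z = dot(w, x) + b; return z",
}

def signature(body):
    """Replace every identifier with a placeholder, keeping only a small
    set of known routine names, so renamed decompiler output still matches."""
    preserved = {"return", "max", "dot"}  # assumed identifiable calls
    return re.sub(
        r"[A-Za-z_]\w*",
        lambda m: m.group(0) if m.group(0) in preserved else "VAR",
        body,
    )

# Decompiled function: original names replaced with random strings.
decompiled = "qz81 = dot(aa3f, kk2) + p9; return qz81"

matches = [name for name, body in KNOWN_LIBRARY.items()
           if signature(body) == signature(decompiled)]
print(matches)  # ['dense_forward']
```

Despite the meaningless names, the structure of the decompiled line is identical to the known dense-layer function, so it is identified; the ReLU function is correctly rejected because its structure differs.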
Specific variables can often be traced through the various functions in the decompiled code, and this tracing can provide clues as to the data type of the inputs to a neural network.
Thus, if a patent claim describes a neural network with particular inputs, it may be possible to determine from the reverse engineered code both that a neural network is present (if a function is identified corresponding to an open-source neural network) and what inputs are being provided to it.
Outputs are usually more difficult to identify from the decompiled code, not least because custom-made code may be used to handle this data. However, if you can successfully trace a given variable through the various functions applied to it, the tracing may still provide sufficient information to deduce the output data type.
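As a toy sketch of this tracing step, suppose the functions in a decompiled call chain have already been identified via cross-referencing, so their return types are known. The return-type table, the renamed variables and the chain below are invented for illustration:

```python
# Return types of functions already identified via cross-referencing
# (an invented table for illustration).
RETURN_TYPES = {
    "load_image": "float[][]",  # 2-D array of pixel values
    "softmax": "float[]",       # probability vector
    "argmax": "int",            # index of the largest entry
}

def trace_type(statements, var):
    """Each statement is (target_variable, called_function); follow the
    assignments in order and report the type last assigned to `var`."""
    inferred = {}
    for target, fn in statements:
        inferred[target] = RETURN_TYPES.get(fn, "unknown")
    return inferred.get(var, "unknown")

# Decompiled chain: x7 = load_image(...); x8 = softmax(x7); x9 = argmax(x8)
chain = [("x7", "load_image"), ("x8", "softmax"), ("x9", "argmax")]
print(trace_type(chain, "x9"))  # 'int' → the output is a class index
```

Even though the variable names are meaningless, following the chain of identified functions reveals that the final output is an integer class index, a strong clue to what the model predicts.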
In conclusion, using decompilation, it is likely to be possible to determine whether a feature of “using a machine learning model to predict x from y” is being infringed.
What about Core-AI inventions, where the inventor has invented a new model architecture, or a new method of training a model?
Another type of ML invention is what we refer to in Europe as a Core-AI invention. Core-AI inventions are where the inventor has made a fundamental improvement to the field of AI itself. For example, a new and improved model architecture, or a new way of training a neural network.
For these types of inventions, the invention is unlikely to be implemented with open-source code alone, because by their very nature, such inventions represent new ML models or processes for training ML models (e.g. the types of processes that eventually end up in open-source libraries).
However, it is still likely that open-source libraries would be used to build up the new model, albeit with additional, bespoke code to implement the new feature. Taking the example of a neural network with a new and improved layer, the cross-referencing technique discussed above can be applied to identify the various functions used to build up the model.
In this scenario, code structures that do not fully match the known, open-source code structures may indicate new segments of code added to implement a Core-AI invention. If the functionality of these new segments can be inferred, then infringement of a Core-AI invention may be demonstrated.
With respect to new methods of training, these can be identified in a similar manner, especially via the functions responsible for pre-processing the training data. It is unlikely that a commercial product would use custom functions for standard data processing techniques; standard libraries would most likely be used to implement processes such as rotating an image, convolution and the like. The particular combination of such techniques used to build up a new method of training may therefore be readily determined from the decompiled code.
The presence of ongoing training per se may also be inferred if a model shows drift. For example, a model undergoing no training (i.e., a static model), when repeatedly provided with the same input, would be expected to produce the same output every time. If, on the other hand, successive outputs were to drift away from the initial output, this would indicate that the model is being trained (i.e., it is a non-static model).
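This drift check can be sketched in a few lines. The `model_fn` callable below is a hypothetical stand-in for a call to the deployed product, and the two toy models are invented to illustrate the static and non-static cases:

```python
import numpy as np

def detect_drift(model_fn, x, n_queries=5, tol=1e-9):
    """Query a model repeatedly with the same input. A static model
    should return identical outputs; drift between successive outputs
    suggests the model is being updated between queries."""
    outputs = [np.asarray(model_fn(x)) for _ in range(n_queries)]
    drift = max(float(np.max(np.abs(o - outputs[0]))) for o in outputs)
    return drift > tol

# Toy illustration: a static model vs. one whose weights shift per call.
static_model = lambda x: 2.0 * x

state = {"w": 2.0}
def training_model(x):
    state["w"] += 0.01  # weights updated between queries
    return state["w"] * x

print(detect_drift(static_model, 1.0))    # False: outputs identical
print(detect_drift(training_model, 1.0))  # True: outputs drift
```

In practice the tolerance would need to account for benign sources of output variation (e.g. deliberately randomised inference), so drift alone is a clue rather than proof of ongoing training.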
Is it viable?
In summary, it seems that, given a compiled executable file, many commercial implementations of ML algorithms can in principle be reverse engineered to a level from which patent infringement could be inferred. However, the time and expense required for such a process will likely render it unviable in many circumstances.
It will also be apparent to the reader that the solutions above apply in circumstances where the executable file is available to decompile; yet many ML models will be made available as services, via the cloud, their executable code effectively kept behind closed doors.
In these circumstances, the only available mechanism for gaining information is to query the service. A targeted query pattern may nonetheless reveal certain details about the underlying model. For example, training data may be built up (in the manner of a model extraction attack) and, in some circumstances, the type of model and details of its architecture may be determined through targeted querying around a decision boundary.
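A minimal sketch of querying around a decision boundary is a binary search over an input range, watching for the point where the service's returned label flips. The black-box `classify` oracle and the one-dimensional input below are invented for illustration; real models have high-dimensional inputs and real attacks probe many boundary points:

```python
def locate_boundary(classify, lo, hi, iters=40):
    """Binary-search a 1-D input range for a decision boundary.

    `classify` is a black-box service returning a class label; repeated
    targeted queries narrow down where the label flips, revealing
    information about the model's decision surface.
    Assumes classify(lo) != classify(hi).
    """
    lo_label = classify(lo)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if classify(mid) == lo_label:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Toy black-box service whose label flips at x = 0.37.
oracle = lambda x: int(x >= 0.37)
boundary = locate_boundary(oracle, 0.0, 1.0)
print(round(boundary, 3))  # 0.37
```

The shape of the decision surface recovered from many such probes (piecewise-linear, smooth, axis-aligned, and so on) is what can hint at the type and architecture of the underlying model.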
Given the complexity of the AI models currently deployed, it is easy enough to assume they simply cannot be reverse engineered. However, even from our brief discussion, it became apparent to us that broad, sweeping statements like this are just not true; given enough time, AI models can be reverse engineered (as is the case with many other programs).
Whether this is commercially viable on a frequent basis is another question; however, even a partial decompilation may be sufficient to justify bringing legal proceedings, especially given that the disclosure phase of litigation could be used to obtain fuller information.
Lastly, even if the time and cost factors of reverse engineering are off-putting, this shouldn’t dissuade an applicant from patenting AI as future developments in AI regulation and standardisation will affect the ability of commercial AI models to remain hidden. For example, the European Union is actively working on proposals to regulate AI, which will likely require the inner workings of AI products used in certain industries to be made transparent in order to ensure they meet European regulatory requirements.
Therefore, applicants should carefully consider whether they wish to forgo patent protection for their AI models based on a generalised assumption that all AI infringement is undetectable.
For more information about patenting “Hidden AI”, including standardisation and transparency laws that are on the horizon, see our article Technology Trends – Why Patent Your Hidden AI?
This is for general information only and does not constitute legal advice. Should you require advice on this or any other topic then please contact firstname.lastname@example.org or your usual Haseltine Lake Kempner advisor.