M. Al-Shedivat, A. Dubey, and E. P. Xing, Contextual explanation networks, 2017.

D. Alvarez-Melis and T. S. Jaakkola, A causal framework for explaining the predictions of black-box sequence-to-sequence models, 2017.

B. Babic, S. Gerke, T. Evgeniou, and I. G. Cohen, Algorithms on regulatory lockdown in medicine, Science, vol.366, issue.6470, pp.1202-1204, 2019.

J. M. Balkin, Information fiduciaries and the First Amendment, UC Davis Law Review, vol.49, p.1183, 2015.

S. Bang, P. Xie, W. Wu, and E. Xing, Explaining a black-box using deep variational information bottleneck approach, 2019.

S. Bhattacharyya, D. Cofer, D. Musliner, J. Mueller, and E. Engstrom, Certification considerations for adaptive systems, 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp.270-279, 2015.

M. Borg, C. Englund, K. Wnuk, B. Duran, C. Levandowski et al., Safely entering the deep: A review of verification and validation for machine learning and a challenge elicitation in the automotive industry, Journal of Automotive Software Engineering, vol.1, issue.1, pp.1-19, 2019.

L. Breiman, Classification and Regression Trees, Wadsworth International Group, Belmont, California, 1984.

L. Breiman, Random forests, Machine Learning, vol.45, pp.5-32, 2001.

J. Burrell, How the machine 'thinks': Understanding opacity in machine learning algorithms, Big Data & Society, vol.3, issue.1, p.2053951715622512, 2016.

C. Chen, O. Li, D. Tao, A. Barnett, C. Rudin et al., This looks like that: deep learning for interpretable image recognition, Advances in Neural Information Processing Systems, pp.8928-8939, 2019.

W. J. Clancey, The epistemology of a rule-based expert system: a framework for explanation, Artificial Intelligence, vol.20, issue.3, pp.215-251, 1983.

C. Coglianese and D. Lehr, Transparency and algorithmic governance, Administrative Law Review, vol.71, p.1, 2018.

W. W. Cohen, F. Yang, and K. R. Mazaitis, TensorLog: Deep learning meets probabilistic DBs, 2017.

C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, vol.20, issue.3, pp.273-297, 1995.

J. Dessalles, Des intelligences TRÈS artificielles, 2019.

F. Doshi-Velez, M. Kortz, R. Budish, C. Bavitz, S. Gershman et al., Accountability of AI under the law: The role of explanation, 2017.

F. d'Alché-Buc, V. Andrés, and J. Nadal, Rule extraction with fuzzy neural network, International Journal of Neural Systems, vol.5, issue.1, pp.1-11, 1994.

N. Ernest, D. Carroll, C. Schumacher, M. Clark, K. Cohen et al., Genetic fuzzy based artificial intelligence for unmanned combat aerial vehicle control in simulated air combat missions, Journal of Defense Management, vol.6, issue.1, 2016.

European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions - Building trust in human-centric artificial intelligence (COM(2019)168), 2019.

European Commission, White paper on artificial intelligence - a European approach to excellence and trust (COM(2020)65 final), 2020.

Europol, From suspicion to action - converting financial intelligence into greater operational impact, 2017.

M. Fabre-Magnan, De l'obligation d'information dans les contrats - essai d'une théorie, 1992.

M. Felici, T. Koulouris, and S. Pearson, Accountability for data governance in cloud ecosystems, 2013 IEEE 5th International Conference on Cloud Computing Technology and Science, vol.2, pp.327-332, 2013.

R. A. Fisher, The use of multiple measurements in taxonomic problems, Annals of Eugenics, vol.7, pp.179-188, 1936.

R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti et al., A survey of methods for explaining black box models, ACM Computing Surveys (CSUR), vol.51, issue.5, p.93, 2018.

J. Haugeland, Artificial Intelligence: The Very Idea, 1985.

AI HLEG (High-Level Expert Group on Artificial Intelligence), Ethics Guidelines for Trustworthy AI, 2019.

D. Houtcieff, La motivation en droit des contrats, Revue de droit d, p.19, 2019.

ICO, Project ExplAIn interim report, 2019.

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, 2019.

Institute of International Finance, Machine learning in anti-money laundering - summary report, 2018.

B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler et al., Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV), 2017.

P. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt et al., The (un)reliability of saliency methods, 2017.

J. A. Kroll, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson et al., Accountable algorithms, University of Pennsylvania Law Review, vol.165, p.633, 2016.

R. Kuhn and R. Kacker, An application of combinatorial methods for explainability in artificial intelligence and machine learning (draft), 2019.

Z. Kurd and T. Kelly, Safety lifecycle for developing safety critical artificial neural networks, Computer Safety, Reliability, and Security, pp.77-91, 2003.

G. Lee, D. Alvarez-Melis, and T. S. Jaakkola, Towards robust, locally linear deep networks, International Conference on Learning Representations, 2019.

G. Lee, W. Jin, D. Alvarez-Melis, and T. S. Jaakkola, Functional transparency for structured data: a game-theoretic approach, 2019.

D. Lehr and P. Ohm, Playing with the data: what legal scholars should learn about machine learning, UC Davis Law Review, vol.51, p.653, 2017.

S. M. Lundberg and S. Lee, A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems, pp.4765-4774, 2017.

W. E. Mackay, Users and Customizable Software: A Co-Adaptive Phenomenon, 1990.

L. Mascarilla, Fuzzy rules extraction and redundancy elimination: an application to remote sensing image analysis, International Journal of Intelligent Systems, vol.12, issue.11, pp.793-818, 1997.

W. J. Maxwell, Smart(er) Internet Regulation Through Cost-Benefit Analysis: Measuring harms to privacy, freedom of expression, and the internet ecosystem, 2017.

D. Alvarez-Melis and T. S. Jaakkola, Towards robust interpretability with self-explaining neural networks, Advances in Neural Information Processing Systems, pp.7775-7784, 2018.

M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry, MIT Press, 1969.

OECD, Artificial Intelligence in Society, 2019.

OECD, Recommendation of the Council on Artificial Intelligence, 2019.

G. E. Peterson, Foundation for neural network verification and validation, Science of Artificial Neural Networks II, vol.1966, pp.196-207, 1993.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why should I trust you?": Explaining the predictions of any classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp.1135-1144, 2016.

M. T. Ribeiro, S. Singh, and C. Guestrin, , 2016.

F. Rosenblatt, The perceptron: a perceiving and recognizing automaton (Project PARA), Report No. 85-460-1, Cornell Aeronautical Laboratory, 1957.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature, vol.323, issue.6088, pp.533-536, 1986.

R. E. Schapire, The strength of weak learnability, Machine Learning, vol.5, issue.2, pp.197-227, 1990.

F. Schauer, Giving reasons, Stanford Law Review, pp.633-659, 1995.

A. Selbst and S. Barocas, The intuitive appeal of explainable machines, Fordham Law Review, vol.87, 2018.

D. Selsam, M. Lamm, B. Bünz, P. Liang, L. de Moura et al., Learning a SAT solver from single-bit supervision, 2018.

R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh et al., Grad-CAM: Visual explanations from deep networks via gradient-based localization, Proceedings of the IEEE International Conference on Computer Vision, pp.618-626, 2017.

E. H. Shortliffe and B. G. Buchanan, A model of inexact reasoning in medicine, Mathematical Biosciences, vol.23, issue.3, pp.351-379, 1975.

K. Simonyan, A. Vedaldi, and A. Zisserman, Deep inside convolutional networks: Visualising image classification models and saliency maps, 2013.

D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, Smoothgrad: removing noise by adding noise, 2017.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, Striving for simplicity: The all convolutional net, 2014.

C. J. Stone, Consistent nonparametric regression, The Annals of Statistics, pp.595-620, 1977.

M. Sundararajan, A. Taly, and Q. Yan, Axiomatic attribution for deep networks, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.3319-3328, 2017.

W. Swartout, C. Paris, and J. Moore, Explanations in knowledge systems: Design for explainable expert systems, IEEE Expert, vol.6, issue.3, pp.58-64, 1991.

P. S. Thomas, B. Castro da Silva, A. G. Barto, S. Giguere, Y. Brun et al., Preventing undesirable behavior of intelligent machines, Science, vol.366, issue.6468, pp.999-1004, 2019.

N. Tishby, F. C. Pereira, and W. Bialek, The information bottleneck method, 2000.

G. G. Towell and J. W. Shavlik, Knowledge-based artificial neural networks, Artificial Intelligence, vol.70, issue.1-2, pp.119-165, 1994.

US Food and Drug Administration, Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device, 2019.

S. Wachter, B. Mittelstadt, and C. Russell, Counterfactual explanations without opening the black box: Automated decisions and the GDPR, Harvard Journal of Law & Technology, vol.31, p.841, 2017.

B. Waltl and R. Vogl, Explainable artificial intelligence - the new frontier in legal informatics, Jusletter IT, vol.4, pp.1-10, 2018.

M. Welling, Are ML and statistics complementary?, IMS-ISBA Meeting on 'Data Science in the Next 50 Years', 2015.

A. F. Winfield and M. Jirotka, The case for an ethical black box, Annual Conference Towards Autonomous Robotic Systems, pp.262-273, 2017.

R. H. Wortham, A. Theodorou, and J. J. Bryson, What does the robot think? Transparency as a fundamental design requirement for intelligent systems, IJCAI-2016 Ethics for Artificial Intelligence Workshop, 2016.

K. Yeung, A. Howes, and G. Pogrebna, AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing, The Oxford Handbook of AI Ethics, 2019.

Q. Zhang, Y. N. Wu, and S. Zhu, Interpretable convolutional neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.8827-8836, 2018.

…classification and regression trees (CART) (Breiman, 1984) and their bagged version, random forests (RF) (Breiman, 2001), as well as boosting-based aggregation (Schapire, 1990).
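To make the three ingredients concrete, here is a minimal sketch (purely illustrative, using scikit-learn on a toy dataset; it is not drawn from the cited works) that fits a single CART tree, a bagged ensemble (random forest), and a boosted ensemble on the same data:

    # Illustrative only: a single tree (CART), bagging (random forest),
    # and boosting, compared on a toy dataset with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "CART (single tree)": DecisionTreeClassifier(random_state=0),
        "Random forest (bagged trees)": RandomForestClassifier(n_estimators=100, random_state=0),
        "AdaBoost (boosted ensemble)": AdaBoostClassifier(n_estimators=100, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")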

Connecting a substantial number of perceptrons through (continuous) non-linear transformations yielded the whole area of (deep) neural networks (NNs).
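As a minimal sketch of that idea (illustrative only; the layer sizes and the tanh activation are arbitrary choices, not taken from the text), stacking affine maps with a non-linearity in between yields a small feed-forward network, whereas without the non-linearity the stack would collapse into a single linear map:

    # Illustrative only: two stacked "perceptron layers" with a non-linear
    # activation form a minimal feed-forward neural network.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, W, b):
        # Affine map followed by a (continuous) non-linear transformation.
        return np.tanh(W @ x + b)

    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # 4 inputs -> 8 hidden units
    W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # 8 hidden -> 2 outputs

    x = rng.normal(size=4)
    output = W2 @ layer(x, W1, b1) + b2             # linear read-out
    print(output)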