XAI
Explains the causal relationships behind AI outputs by exhaustively verifying every possibility in the input table data, presenting explanations for judgment results along with on-site improvement actions.
Fujitsu's explainable learning technology detects adversarial-example attacks, overcoming the misidentification problem of image recognition AI.
This technology uses a neuro-symbolic approach to provide explanations for image classifications made by AI, enabling domain experts to validate them.
Presents the reasons for AI judgments, together with on-site improvement actions, by exhaustively exploring hypotheses derived from the input table data.
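To illustrate the idea of exhaustively exploring hypotheses over table data to explain a judgment and suggest improvement actions, here is a minimal toy sketch. All names, the stand-in model, and the counterfactual-style search are illustrative assumptions, not Fujitsu's actual algorithm.

```python
# Toy sketch (hypothetical, NOT Fujitsu's implementation): exhaustively try
# candidate feature values and report the single-feature changes that flip
# the AI's judgment - these serve as suggested on-site improvement actions.

def simple_model(row):
    # Stand-in "AI judgment": flag as NG when temperature is high
    # and pressure is low.
    return "NG" if row["temp"] > 80 and row["pressure"] < 30 else "OK"

def explain(row, candidate_values):
    """Try every candidate value for each feature and collect the
    changes that flip the model's judgment."""
    base = simple_model(row)
    actions = []
    for feature, values in candidate_values.items():
        for v in values:
            trial = dict(row, **{feature: v})  # row with one feature changed
            if simple_model(trial) != base:
                actions.append((feature, row[feature], v, simple_model(trial)))
    return base, actions

row = {"temp": 90, "pressure": 20}
judgment, actions = explain(row, {"temp": [70, 80, 90], "pressure": [20, 40]})
print(judgment)   # NG
for a in actions:
    print(a)      # e.g. ('pressure', 20, 40, 'OK')
```

Each reported tuple reads as an improvement action: which feature to change, from what value, to what value, and the resulting judgment. A real system would prune the combinatorial search and validate causality rather than mere correlation, but the exhaustive enumeration above conveys the core mechanism.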