VeriML: Enabling Integrity Assurances and Fair Payments for Machine Learning as a Service

Abstract: Machine Learning as a Service (MLaaS) allows clients with limited resources to outsource their expensive ML tasks to powerful servers. Despite the huge benefits, current MLaaS solutions still lack strong assurances on: 1) service correctness (i.e., whether the MLaaS works as expected); 2) trustworthy accounting (i.e., whether the bill for MLaaS resource consumption is correctly accounted); and 3) fair payment (i.e., whether a client gets the entire MLaaS result before making the payment). Without these assurances, unfaithful service providers can return improperly executed ML task results or partially trained ML models while asking for over-claimed rewards. Moreover, it is hard to argue for wide adoption of MLaaS by both clients and service providers, especially in an open market without a trusted third party. In this paper, we present VeriML, a novel and efficient framework that brings integrity assurances and fair payments to MLaaS. With VeriML, clients can be assured that ML tasks are correctly executed on an untrusted server and that the resource consumption claimed by the service provider matches the actual workload. We strategically apply succinct non-interactive arguments of knowledge (SNARKs) to randomly selected iterations of the ML training phase, achieving efficiency with tunable probabilistic assurance. We also develop multiple ML-specific optimizations for the arithmetic circuits required by SNARKs. Our system implements six common algorithms: linear regression, logistic regression, neural network, support vector machine, K-means, and decision tree. The experimental results validate the practical performance of VeriML.
• This paper proposes a new efficient verifiable convolutional neural network (vCNN) framework which accelerates proving performance tremendously.
• To increase proving performance, we propose a new efficient relation representation for convolution equations.
• We implement vCNN and compare it with existing zk-SNARKs in terms of proof size and computation.
• The proposed scheme improves the key generation/proving time 25-fold and reduces the CRS size 30-fold compared with the state-of-the-art zk-SNARK scheme.
• In our system, a client C outsources a machine learning task to a server S with a training dataset D. S trains a prediction model M according to a certain ML algorithm and its parameters.
• After the training phase, C submits challenges r to verify the execution of the learning algorithm, and in turn, S returns the corresponding proofs p without providing M.
• The main purpose of our work is to solve the problem of verifying the integrity of ML model training, i.e., proving that S has actually executed the specified computation task.
• Beyond that, we also aim to ensure the integrity of prediction services provided by S without revealing the ML model to C.
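Under simplifying assumptions, the challenge-response flow above can be sketched as follows. This is a minimal illustration only: it uses a toy deterministic training step, plain SHA-256 hashes in place of cryptographic commitments, and client-side re-execution of one iteration in place of a SNARK proof; all names and parameter values are hypothetical.

```python
import hashlib
import random

def step(w, lr=0.1):
    # Toy "training iteration": one gradient-descent step on f(w) = w^2.
    return w - lr * 2 * w

def commit(value):
    # Stand-in for a cryptographic commitment (here, just a hash).
    return hashlib.sha256(repr(value).encode()).hexdigest()

# --- Server S: run T iterations and commit to every intermediate state ---
T = 100
states = [1.0]  # initial model parameter
for _ in range(T):
    states.append(step(states[-1]))
commitments = [commit(s) for s in states]  # published to the client

# --- Client C: challenge a random iteration i ---
i = random.randrange(T)
# S reveals states[i] and states[i + 1]; C checks both against the
# published commitments and re-executes the single challenged iteration.
assert commit(states[i]) == commitments[i]
assert commit(states[i + 1]) == commitments[i + 1]
assert commit(step(states[i])) == commitments[i + 1]
```

In the real protocol the server proves each challenged iteration with a SNARK rather than revealing the states in the clear, which is what keeps the model M hidden from C.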
The proposed scheme is a verifier-friendly zk-SNARK scheme with a constant proof size, and its verification time depends linearly on the input and output only, regardless of the complexity of the convolution layers. We propose an efficient construction of vCNN to verify the evaluation of entire CNNs. Our construction combines QPP-based zk-SNARKs optimized for convolutions with QAP-based zk-SNARKs that work effectively for pooling and ReLU layers, and interconnects them using CP-SNARKs.
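The QPP-based representation exploits the fact that 1-D convolution is exactly polynomial multiplication, so a convolution output can be checked with a single polynomial identity test at a random point (by the Schwartz-Zippel lemma, a wrong output passes only with negligible probability). A minimal sketch over a toy prime field; the modulus and vector values are illustrative:

```python
import random

def convolve(a, b):
    # Full 1-D convolution: out[k] = sum over i + j == k of a[i] * b[j].
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_eval(coeffs, x, p):
    # Horner evaluation mod p; coeffs are ordered lowest degree first.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

p = 2**61 - 1          # toy field modulus (a Mersenne prime)
a = [1, 2, 3, 4]       # filter coefficients
b = [5, 6, 7]          # input coefficients
c = convolve(a, b)     # claimed convolution output

# Verifier's check: A(r) * B(r) == C(r) at a random point r implies
# c really is the convolution, except with negligible probability.
r = random.randrange(p)
assert poly_eval(a, r, p) * poly_eval(b, r, p) % p == poly_eval(c, r, p)
```

This single multiplication check is what lets the QPP-based circuit avoid encoding every multiply-accumulate of the convolution individually.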
• We develop a new commit-and-prove protocol that can verify the loop iterations of machine learning model training with high efficiency.
• A detailed theoretical analysis demonstrates that our scheme can detect incorrect computations with high probability.
• We implement the VeriML framework with six popular ML algorithms: linear regression, logistic regression, neural network, support vector machine, K-means, and decision tree on four real-world datasets to demonstrate its performance.
• Our results show that VeriML has a practical overhead as well as a negligible accuracy loss in training.
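The probabilistic guarantee from checking randomly selected iterations follows a standard hypergeometric argument: if the server computes m of T iterations incorrectly and the client verifies k iterations sampled uniformly without replacement, the chance of catching at least one bad iteration is 1 - C(T-m, k)/C(T, k). A minimal sketch (the parameter values below are illustrative, not taken from the paper):

```python
from math import comb

def detection_probability(T, m, k):
    """Probability that sampling k of T iterations uniformly without
    replacement hits at least one of the m incorrectly computed ones."""
    if k > T - m:
        return 1.0  # more samples than honest iterations: always caught
    return 1 - comb(T - m, k) / comb(T, k)

# Example: cheating on 10% of 1000 iterations is caught with probability
# above 99% even if the client checks only 50 of them.
p_catch = detection_probability(1000, 100, 50)
```

This is why verifying a small random subset of iterations suffices: the verification cost is tunable via k while the detection probability stays high for any non-trivial fraction of cheating.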