mangsense committed (verified)
Commit 3b0aa07 · 1 parent: dd0fb08

Update README.md

Files changed (1):
1. README.md +37 −31
README.md CHANGED
@@ -1,45 +1,46 @@
  ---
- language: en
  datasets:
- - codexglue
  ---
- # CodeBERT fine-tuned for Insecure Code Detection 💾⛔
-
- [codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on the [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for the **Insecure Code Detection** downstream task.
-
- ## Details of [CodeBERT](https://arxiv.org/abs/2002.08155)
-
- We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
-
- ## Details of the downstream task (code classification) - Dataset 📚
-
- Given a piece of source code, the task is to identify whether it is insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities, and DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.
-
- The [dataset](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) used comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). All projects are combined and split 80%/10%/10% for training/dev/test.
-
- Data statistics of the dataset are shown in the table below:
-
- |       | #Examples |
- | ----- | :-------: |
- | Train |  21,854   |
- | Dev   |   2,732   |
- | Test  |   2,732   |
-
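As a quick sanity check, the Train/Dev/Test counts in the table are consistent with the 80%/10%/10% split described above (plain Python, no dependencies):

```python
# Counts from the table above.
train, dev, test = 21_854, 2_732, 2_732
total = train + dev + test

# Each share should come out near 80% / 10% / 10%.
for name, n in (("Train", train), ("Dev", dev), ("Test", test)):
    print(f"{name}: {n} ({n / total:.1%} of {total})")
```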
- ## Test set metrics 🧾
-
- | Methods  |    ACC    |
- | -------- | :-------: |
- | BiLSTM   |   59.37   |
- | TextCNN  |   60.69   |
- | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 61.05 |
- | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 62.08 |
- | [Ours](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) | **65.30** |
-
- ## Model in Action 🚀
-
  ```python
  from transformers import AutoTokenizer, AutoModelForSequenceClassification
  import torch
@@ -56,6 +57,11 @@ logits = outputs.logits
  print(np.argmax(logits.detach().numpy()))
  ```
-
- > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
  ---
+ language:
+ - code
+ license: apache-2.0
+ tags:
+ - code
+ - security
+ - vulnerability-detection
+ - codebert
  datasets:
+ - code_x_glue_cc_defect_detection
+ pipeline_tag: text-classification
+ widget:
+ - text: |
+     import java.sql.*;
+     public class Example {
+         public void query(String input) {
+             String sql = "SELECT * FROM users WHERE name = '" + input + "'";
+         }
+     }
  ---
+ # CodeBERT fine-tuned for Java Vulnerability Detection
+
+ A CodeBERT model fine-tuned to detect security vulnerabilities in Java code.
+
+ ## Model Description
+
+ This model is fine-tuned from [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) for binary classification of secure vs. insecure Java code.
+
+ ## Intended Uses
+
+ - Detect security vulnerabilities in Java source code
+ - Binary classification: Safe (LABEL_0) vs Vulnerable (LABEL_1)
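The Safe/Vulnerable convention above maps onto the classifier's two output logits; a minimal decoding sketch with made-up logits standing in for real model output (plain Python, no model required):

```python
import math

# Hypothetical logits for one snippet, standing in for a real model's
# outputs.logits[0] (index 0 = LABEL_0 / Safe, index 1 = LABEL_1 / Vulnerable).
logits = [-1.2, 2.3]

# Softmax over the two classes, then pick the larger probability.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
pred = probs.index(max(probs))
label = ["Safe (LABEL_0)", "Vulnerable (LABEL_1)"][pred]
print(label, f"p={probs[pred]:.2f}")
```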
+ ## How to Use
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("mangsense/codebert_java")
+ model = AutoModelForSequenceClassification.from_pretrained("mangsense/codebert_java")
+ ```
  ```python
  from transformers import AutoTokenizer, AutoModelForSequenceClassification
  import torch

  print(np.argmax(logits.detach().numpy()))
  ```
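The diff elides the middle of the snippet above. A typical middle step is sketched below as an assumption, not the file's actual contents: tokenize one snippet, run the forward pass, then apply the `np.argmax` shown in the last line. The network-dependent part is wrapped in a function so the final step can be demonstrated on its own:

```python
import numpy as np

def classify(code: str, tokenizer, model) -> int:
    """Sketch of the elided middle: tokenize one snippet, run the forward
    pass, and return 0 (Safe) or 1 (Vulnerable). `tokenizer` and `model`
    are the objects loaded with from_pretrained above."""
    import torch  # only needed when a real model is passed in
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(np.argmax(logits.detach().numpy()))

# The final argmax step on its own, with hypothetical logits:
fake_logits = np.array([[0.1, 1.4]])
pred = int(np.argmax(fake_logits))
print(pred)
```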
+ ## Training Data
+
+ Trained on the CodeXGLUE Defect Detection dataset.
+
+ ## Limitations
+
+ - Focused on Java code only
+ - May not detect all types of vulnerabilities