Robust Graph Representation Learning for Local Corruption Recovery

Reviews

Summary

Briefly summarize the paper and its contributions. This is not the place to critique the paper; the authors should generally agree with a well-written summary.

This paper studies the problem of feature corruption detection and recovery. Using an unsupervised graph autoencoder, the work identifies locally corrupted features and generates a mask accordingly. With the generated mask, ADMM is adopted to solve an optimization problem that yields a recovered, robust representation.
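For concreteness, below is a minimal sketch (in Python/NumPy) of how I understand the detect-then-recover pipeline. It is not the paper's exact formulation: the unsupervised graph autoencoder is replaced by a simple neighbour-average reconstruction, and the ADMM step solves a generic masked-fidelity plus graph-smoothness objective; all function names, thresholds, and parameters are illustrative assumptions.

```python
# Hypothetical sketch of the detect-then-recover pipeline described in the summary.
# The paper uses an unsupervised graph autoencoder to build the mask; here a
# neighbour-average reconstruction stands in for it. The recovery step solves
#   min_Z 0.5*||M (*) (Z - X)||_F^2 + 0.5*lam*tr(Z^T L Z)
# by ADMM with the splitting Z = W ((*) denotes the elementwise product).
import numpy as np


def detect_mask(A, X, quantile=0.9):
    """Flag feature entries whose residual against a neighbour-average
    reconstruction is unusually large (1 = trusted, 0 = suspected corrupted)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    X_hat = (A @ X) / deg                      # stand-in for the autoencoder output
    residual = np.abs(X - X_hat)
    threshold = np.quantile(residual, quantile)
    return (residual <= threshold).astype(float)


def recover_admm(A, X, M, lam=1.0, rho=1.0, n_iter=100):
    """Re-estimate masked entries via ADMM on a masked-fidelity plus
    graph-smoothness objective; trusted entries stay close to the observations."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A             # unnormalised graph Laplacian
    Z = X.copy()
    W = X.copy()
    U = np.zeros_like(X)
    lhs = lam * L + rho * np.eye(n)            # constant system matrix for the W-step
    for _ in range(n_iter):
        Z = (M * X + rho * (W - U)) / (M + rho)    # elementwise fidelity update
        W = np.linalg.solve(lhs, rho * (Z + U))    # graph-smoothing update
        U = U + Z - W                              # dual variable update
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((30, 30)) < 0.15).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # random undirected graph
    X = rng.normal(size=(30, 8))
    X[5, :3] += 10.0                               # inject a local corruption
    M = detect_mask(A, X)
    Z = recover_admm(A, X, M)
    print("flagged entries:", int((1 - M).sum()))
```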


Strengths And Weaknesses

Please provide a thorough assessment of the strengths and weaknesses of the paper, touching on each of the following dimensions: originality, quality, clarity and significance. You can incorporate Markdown and Latex into your review. See /faq.

Strengths:
1. This work studies robust graph representation through node feature masks instead of the commonly used edge masks, which offers results from an alternative angle.
2. The problem definition, assumptions, subproblems, and parameter choices are defined and presented clearly.

Weaknesses:
1. The proposed method can be seen as a defense method in the adversarial attack literature. However, in Table 1 the experimental results of the proposed MAGnet on CORA and CiteSeer, under both the attribute-injection and meta-attack settings, are no better than the baseline ElasticGNN once the variance is taken into account.
2. As mentioned in the paper, feature corruption detection is essentially graph anomaly detection, so the proposed corruption detection method should also be compared with graph anomaly detection baselines.
3. The scale of the graph datasets in the experiments is relatively small; it would be better to include results on larger and more diverse graphs.
4. The overhead and complexity of the proposed method should also be carefully discussed.


Questions

Please list up and carefully describe any questions and suggestions for the authors. Think of the things where a response from the author can change your opinion, clarify a confusion or address a limitation. This can be very important for a productive rebuttal and discussion phase with the authors.


Limitations

Have the authors adequately addressed the limitations and potential negative societal impact of their work? If not, please include constructive suggestions for improvement. Authors should be rewarded rather than punished for being up front about the limitations of their work and any potential negative societal impact.


Soundness

Please assign the paper a numerical rating on the following scale to indicate the soundness of the technical claims, experimental and research methodology and on whether the central claims of the paper are adequately supported with evidence.

4 excellent

3 good

2 fair

1 poor

Presentation

Please assign the paper a numerical rating on the following scale to indicate the quality of the presentation. This should take into account the writing style and clarity, as well as contextualization relative to prior work.

4 excellent

3 good

2 fair

1 poor

Contribution

Please assign the paper a numerical rating on the following scale to indicate the quality of the overall contribution this paper makes to the research area being studied. Are the questions being asked important? Does the paper bring a significant originality of ideas and/or execution? Are the results valuable to share with the broader NeurIPS community.

4 excellent

3 good

2 fair

1 poor

Rating

Please provide an "overall score" for this submission.

10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.

9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.

8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.

7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.

3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.

2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.

1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations.

Confidence

Please provide a "confidence score" for your assessment of this submission to indicate how confident you are in your evaluation.

5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.

4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.