Multi Scale Frequency Enhanced Graph Neural Networks
Summary
Briefly summarize the paper and its contributions. This is not the place to critique the paper; the authors should generally agree with a well-written summary.
This work decomposes input features into low- and high-frequency components via wavelet decomposition and analyzes how each frequency band affects GNN performance as model depth increases. It further proposes separate information propagation and a consistency regularization term to exploit multi-scale frequency features.
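For concreteness, a minimal sketch of the kind of low-/high-frequency split described above, using a graph Fourier threshold as a simple stand-in for the paper's wavelet decomposition (function and variable names are illustrative, not the authors' code):

```python
# Illustrative sketch only: a spectral low-/high-frequency split of node
# features, standing in for the paper's wavelet decomposition.
import numpy as np

def frequency_split(adj: np.ndarray, feats: np.ndarray, cutoff: float = 1.0):
    """Split node features into low- and high-frequency components.

    adj    : dense symmetric adjacency matrix, shape (n, n)
    feats  : node feature matrix, shape (n, d)
    cutoff : eigenvalue threshold separating "low" from "high" frequencies
             (eigenvalues of the normalized Laplacian lie in [0, 2])
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian

    eigvals, eigvecs = np.linalg.eigh(lap)            # graph Fourier basis
    coeffs = eigvecs.T @ feats                        # graph Fourier transform

    low_mask = (eigvals <= cutoff)[:, None]
    low = eigvecs @ (coeffs * low_mask)               # low-frequency component
    high = feats - low                                # high-frequency remainder
    return low, high
```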
Strengths And Weaknesses
Please provide a thorough assessment of the strengths and weaknesses of the paper, touching on each of the following dimensions: originality, quality, clarity, and significance. You can incorporate Markdown and LaTeX into your review.
Strengths:
1. This paper studies GNNs through the lens of the wavelet transform, which may offer new insight into the relationship between graph signals and GNNs.
2. The presentation is clear.
Weaknesses:
1. Highly relevant literature is not discussed, so the novelty and significance of this work need to be better clarified:
    1. The consistency regularization term shares a very similar formulation, and a similar theoretical analysis with respect to model depth, with P-reg in [1], i.e., \(L_{reg}=\|\hat{A}Z - Z\|\) (a minimal sketch of this regularizer is given after the references below). Similarly, by controlling the regularization coefficient, \(Z\) can approximate different frequency features. P-reg should therefore be explicitly discussed and compared against.
    2. Graph Wavelet Neural Network [2] is also highly relevant, as it uses the wavelet transform to address limitations of GCN and other spectral methods.
2. The graph datasets in the experiments are relatively small and can be trained in a full-graph manner. Results on larger and more diverse graphs would better demonstrate the performance and scalability of the proposed method.
3. Is there any guidance for choosing appropriate frequency features other than hyperparameter search? This could be a critical problem when applying the proposed method in practice.
[1] Rethinking Graph Regularization for Graph Neural Networks. Han Yang, Kaili Ma, James Cheng. AAAI 2021.
[2] Graph Wavelet Neural Network. Bingbing Xu, Huawei Shen, Qi Cao, Yunqi Qiu, Xueqi Cheng. ICLR 2019.
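For reference, a minimal sketch of the P-reg-style term in [1] that the consistency regularization resembles, following the formula \(L_{reg}=\|\hat{A}Z - Z\|\) quoted above (the row normalization and the squared-Frobenius choice are assumptions, and all names are illustrative, not the authors' implementation):

```python
# Illustrative sketch of a P-reg-style propagation regularizer, L_reg = ||A_hat Z - Z||,
# where A_hat is a normalized adjacency matrix and Z the node representations.
import torch

def p_reg(adj: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Penalize the difference between node representations and their
    one-hop propagated versions.

    adj : dense adjacency matrix with self-loops, shape (n, n)
    z   : node representations (e.g., GNN outputs), shape (n, d)
    """
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-12)
    a_hat = adj / deg                        # row-normalized adjacency
    diff = a_hat @ z - z                     # A_hat Z - Z
    return diff.pow(2).sum() / z.shape[0]    # squared norm, averaged over nodes
```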
Questions
Please list and carefully describe any questions and suggestions for the authors. Think of things where a response from the authors could change your opinion, clarify a confusion, or address a limitation. This can be very important for a productive rebuttal and discussion phase with the authors.
Limitations
Have the authors adequately addressed the limitations and potential negative societal impact of their work? If not, please include constructive suggestions for improvement. Authors should be rewarded rather than punished for being up front about the limitations of their work and any potential negative societal impact.
Soundness
Please assign the paper a numerical rating on the following scale to indicate the soundness of the technical claims, experimental and research methodology and on whether the central claims of the paper are adequately supported with evidence.
4 excellent
3 good
2 fair
1 poor
Presentation
Please assign the paper a numerical rating on the following scale to indicate the quality of the presentation. This should take into account the writing style and clarity, as well as contextualization relative to prior work.
4 excellent
3 good
2 fair
1 poor
Contribution
Please assign the paper a numerical rating on the following scale to indicate the quality of the overall contribution this paper makes to the research area being studied. Are the questions being asked important? Does the paper bring a significant originality of ideas and/or execution? Are the results valuable to share with the broader NeurIPS community.
4 excellent
3 good
2 fair
1 poor
Rating
Please provide an "overall score" for this submission.
10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.
9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations.
Confidence
Please provide a "confidence score" for your assessment of this submission to indicate how confident you are in your evaluation.
5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.