In our last post introducing Geometric Deep Learning, we situated the topic within the context of the current Deep Learning gold rush. Critically, we outlined what makes GDL stand out in terms of its potential. For this post, it is sufficient to know that GDL is deep learning performed on irregular data structures, such as graphs, meshes, and point clouds. We also showed relevant tasks on which state-of-the-art performance can be achieved by employing GDL approaches. Despite performing well on various tasks, the irregular data structures of GDL cannot easily be fit into existing Deep Learning frameworks. This poor fit necessitates the development of new libraries for effective and efficient computation. We'll briefly introduce some relevant details on how libraries for GDL model this irregular data and the computations on it, focusing on the consequences this has for the end user.

Finally, as promised in the post's title, we'll briefly discuss and compare each of the current main libraries for GDL. The comparison is performed by way of a morphological analysis based on an exemplary use case. We provide the tools used for this analysis, and you can easily customize them to your own use case and requirements. In the end, we'll discuss some of the key differences between the libraries and conclude with advice on which library to use for three typical use cases: scientific research, production-level development, and casual hobby projects.

Among the many innovations sparked by advances in Deep Learning was the creation of software development frameworks specific to this field. The list of frameworks and libraries is extensive, and their primary focus varies, but there are still commonalities between them. Most offer a basic set of layers and functions with support for multi-threading on the CPU and offloading of parallel computations to the GPU, while relying on automatic differentiation to quickly compute gradients for backpropagation.

But to what extent have Deep Learning frameworks permeated software development processes? To answer this question, we need a proxy for the relevancy of these frameworks. As it is commonplace for software developers, irrespective of their background, to use search engines to gather information before and during projects, we'll use Google Trends as that proxy. We compared search terms to search topics, so bear in mind that there are more terms hidden within the latter (e.g. …). The values in the graphs below are normalized web searches for a better comparison between different topics, and are aggregated worldwide.

Google Trends for "PyTorch" and "TensorFlow" search terms, and "C" search topic (programming language) / Google Trends Query

It is evident from the chart that there is substantial interest, even more so when one takes into account that we're comparing frameworks with a general-purpose programming language. Interestingly, when closing in on specific countries for the previous query, this picture changes quite a bit. In China, the frameworks above even gain the upper hand, as can be seen below.

As Baidu has a far larger market share in China, and these results only pertain to Google web searches, we have to take this data with a grain of salt. Furthermore, there are alternative, equally valuable proxies, e.g. survey data from StackOverflow or GitHub public repository data. The advantage of web search data is that it comes quite close to an unbiased view of what interests people. A survey has a plethora of biases that have to be taken into account when interpreting the data. Regarding GitHub, relying only on public data skews our view, as it will only contain open-source projects, which constitute just a fraction of the whole picture.

Left: Google Trends for "pytorch geometric", "deep graph library", and "graph nets" search terms / Google Trends Query | Right: Google Trends as in the left graph with an additional "tensorflow" search term / Google Trends Query

From the charts above, it is quite apparent that GDL libraries are still in a tiny niche of early adoption within the Deep Learning community, as evidenced by the periods with close to no web searches. How do GDL libraries fare in the context of general Deep Learning frameworks? In the comparison graph with TensorFlow, their web searches barely make a dent. Regarding interest among the individual GDL libraries, there's a definite uptick in competition and overall interest from the beginning of 2019, and a somewhat more sustained rate of interest from May 2019, when ICLR 2019 took place. This change of pace is no coincidence, as both PyTorch Geometric and the Deep Graph Library were officially presented in workshops on that occasion, accompanied by workshop paper publications.
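As a side note on the normalized values in the Google Trends charts discussed above: Trends scales every series in a query relative to the single highest data point across all series, which then reads as 100. A minimal sketch of that scaling with made-up raw counts (the helper name `normalize_trends` and the numbers are illustrative, not Google's actual pipeline):

```python
def normalize_trends(series):
    """Scale raw search counts the way Google Trends presents them:
    the highest point across all series in a query becomes 100,
    and every other value is expressed relative to that peak."""
    peak = max(max(counts) for counts in series.values())
    return {
        term: [round(100 * c / peak) for c in counts]
        for term, counts in series.items()
    }

# Hypothetical raw weekly search counts (illustrative only).
raw = {
    "tensorflow": [5000, 5200, 4800],
    "pytorch geometric": [40, 55, 60],
}

scaled = normalize_trends(raw)
# The dominant term sets the peak at 100, so a niche term's values
# are squashed toward 0 in the same chart.
```

This relative scaling is exactly why the GDL libraries "barely make a dent" when plotted next to TensorFlow: their absolute search volume may be growing, but it is rounded down to near zero once everything is expressed relative to TensorFlow's peak.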