Adam Małkowski
supervisor: Paweł Wawrzyński
In recent years, many models and solutions have been designed for standard AI tasks, e.g., classification, regression, or generation. To apply these tools to data types that have no fixed numerical representation (e.g., text, video, or audio), the idea of embeddings was introduced: an embedding transforms a particular object, e.g., a sentence, into a fixed-size numerical vector. This transformation allows the object to be processed in the same way as standard tabular data.
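The embedding idea can be illustrated with a minimal sketch. The hashing-based function below is purely illustrative (real embeddings such as word2vec or BERT are learned, not hashed); it only demonstrates the contract: variable-length input, fixed-size numerical output.

```python
import hashlib

def embed_sentence(sentence: str, dim: int = 8) -> list[float]:
    """Toy illustration of the embedding contract: map a variable-length
    sentence to a fixed-size vector via the hashing trick. Real embedding
    models learn this mapping from data."""
    vec = [0.0] * dim
    for token in sentence.lower().split():
        # hash each token to a stable bucket index
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        vec[h % dim] += 1.0
    # normalize so sentences of different lengths remain comparable
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]
```

Whatever the input length, the output dimension is fixed, so downstream models can treat it like ordinary tabular data.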
One of the most complex data types is graphs. Despite the wide use of graph data (molecules, network topologies, social media, relationships), graph datasets are still less popular than text or image datasets. Moreover, graphs have several inconvenient properties: they lack a natural ordering of nodes, their representation is ambiguous (the same graph can be encoded by many equivalent adjacency matrices), and it is difficult to handle graphs of different sizes or to compare graphs. Existing neural models for graph processing are currently less mature than analogous models for other data types.
In my research, I propose a new neural architecture for processing graph data: the recursive graph autoencoder (ReGAE). The model encodes examples into fixed-size embeddings and reconstructs the original samples from those embeddings. A single instance of ReGAE can process graphs of arbitrary, unrestricted size. Besides computing embeddings, the model can serve as a base model for graph generation (VAE, GAN), transformation, or classification.
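The encode/decode contract described above can be sketched as follows. This is not the published ReGAE architecture; the class name, the degree-pooling encoder, and the placeholder decoder are all hypothetical stand-ins, shown only to make the interface concrete: graphs of any size map to a fixed-size embedding, and the decoder attempts to reconstruct the adjacency structure from that embedding alone.

```python
class GraphAutoencoderSketch:
    """Hypothetical interface illustrating the autoencoder idea for graphs.
    NOT the ReGAE model itself: encode() here merely pools per-node degree
    statistics, and decode() is an untrained placeholder."""

    def __init__(self, embedding_dim: int = 16):
        self.embedding_dim = embedding_dim

    def encode(self, adjacency: list[list[int]]) -> list[float]:
        # Fixed-size output regardless of graph size: pool node degrees
        # into a vector of length embedding_dim.
        n = len(adjacency)
        degrees = [sum(row) for row in adjacency]
        vec = [0.0] * self.embedding_dim
        for i, d in enumerate(degrees):
            vec[i % self.embedding_dim] += d / max(n, 1)
        return vec

    def decode(self, embedding: list[float], n: int) -> list[list[int]]:
        # Placeholder: a trained model would invert encode(), predicting
        # the adjacency matrix from the embedding.
        return [[0] * n for _ in range(n)]
```

The key property the sketch demonstrates is that a 3-node and a 500-node graph both yield an embedding of the same fixed length, which is what allows one model instance to handle unrestricted graph sizes.

```python
model = GraphAutoencoderSketch()
path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]       # 3-node path graph
empty5 = [[0] * 5 for _ in range(5)]            # 5-node empty graph
e3, e5 = model.encode(path3), model.encode(empty5)
```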