The massive cost of training large generative models has made model reuse and composition essential for achieving the desired flexibility. In a fruitful collaboration with the Massachusetts Institute of Technology (MIT), we show how advanced generative techniques such as diffusion models and GFlowNets can be composed in a principled manner to go beyond what the individual pretrained models can achieve on their own. Our approach opens up several promising opportunities, and we empirically validate our method on image and molecular generation tasks. This work was published at NeurIPS 2023.