When I see great images, I feel the urge to simulate them.
Imagine synthesizing an image stream of natural content, for instance the video posted at https://vimeo.com/124858722, at comparable visual fidelity and under interactive temporal constraints.
Do we need suitable data embedded in some high-dimensional space? What about the SDS problem, or the sampling problem that Monte-Carlo approaches introduce through the dimension mismatch between existing theories and linear computing devices? Consider the relation of visual image synthesis to spatial material synthesis, and recall the challenge of high-frequency distributions within the nonlinear material and transport space.
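One concrete way to see the sampling problem hinted at above is the curse of dimensionality in naive Monte-Carlo estimation. The sketch below is my own illustrative example, not taken from any specific work: it estimates the volume fraction of the unit hypersphere inside the cube [-1, 1]^d by rejection sampling, and the hit rate collapses as the dimension grows, so ever more samples are wasted.

```python
import random

def hypersphere_hit_rate(dim, n_samples, seed=0):
    """Fraction of uniform samples from [-1, 1]^dim that fall inside
    the unit hypersphere (i.e. have squared norm <= 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Draw one point uniformly from the cube and test membership.
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0:
            hits += 1
    return hits / n_samples

if __name__ == "__main__":
    # The hit rate shrinks rapidly with dimension: roughly 0.79 in 2-D,
    # 0.16 in 5-D, and well below 0.01 in 10-D.
    for dim in (2, 5, 10):
        print(dim, hypersphere_hit_rate(dim, 100_000))
```

In high-dimensional synthesis problems the target distribution occupies a vanishing fraction of the ambient space, which is why uniform sampling fails and importance-based strategies become necessary.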
How can we achieve synthesis in the visual and acoustic domains that is both effective and efficient, from a theoretical as well as an artistic perspective?
These research questions form the basis of my academic endeavours, and I never stop looking for answers. Notable breakthroughs will be published here, but only once they meet a high standard. Please be patient.