

 D O T




In the digital era, technological advancements have led to a homogenization effect, blurring individual identity and disrupting a unified sense of self. AI plays a pivotal role in shaping identity within the digital realm. A recent article, "Why Is Virtual Reality Interesting for Philosophers?", explores this through two key concepts: the Epistemic Agent Model (EAM) and the Hybrid Avatar/Virtual Agent System (HAVAS). EAM highlights the communication gap between humans and AI, underscoring the challenge of defining identity in digital spaces. Meanwhile, HAVAS illustrates the complexity of digital identity, shaped not solely by individual agency but through the interplay between humans and AI.













These insights led me to reflect on the relationship between humans and AI in the digital realm. Rather than a mere tool, AI serves as a facilitator: it enables emergence rather than creating independently.  
As an efficient yet limited partner in navigating the digital landscape, AI prompts fundamental questions: How does it perceive, interpret, or potentially misinterpret this world?  


















Perception serves as the foundational step in constructing the digital world. Only by capturing features and information from the real world can we reconstruct it in the digital realm. Just as the eyes are the key sensors through which humans gather external information, imaging systems act as the eyes of AI, capturing and processing the visual features of the real world. As an ill-posed inverse problem, 3D reconstruction infers three-dimensional information from two-dimensional observations, relying on discrete point clouds to represent 3D geometry. Thus, point clouds serve as a crucial medium through which AI observes, interprets, and reconstructs reality in the digital domain.
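The idea can be sketched in a few lines of Python: a point cloud is nothing more than a discrete set of 3D samples. In this illustrative sketch (the "observed object" is an assumption, not part of the project), points are scattered over a unit sphere, the way a scanner might sample an object's surface.

```python
import numpy as np

# Illustrative sketch: a point cloud as a discrete set of 3D samples.
# Here the observed object is assumed to be a unit sphere.
rng = np.random.default_rng(0)
n = 1000
phi = rng.uniform(0, 2 * np.pi, n)        # azimuth angle
cos_theta = rng.uniform(-1, 1, n)         # uniform sampling over the sphere
theta = np.arccos(cos_theta)
points = np.stack([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)], axis=1)  # (n, 3) array of x, y, z samples
print(points.shape)  # (1000, 3)
```

Each row is one discrete observation; the continuous surface exists only implicitly, between the points.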

































However, due to inherent limitations of imaging systems, image noise inevitably introduces spurious interference into visual data.

During depth conversion, the image is treated as a grid of three-dimensional points connected by interleaving lines. The grayscale value of each pixel determines its relative height, and the corresponding vertex is displaced along the depth axis, transforming a flat 2D image into a three-dimensional geometric structure.
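A minimal sketch of this depth conversion, assuming grayscale values are normalized to [0, 1] (function and parameter names here are illustrative, not from any specific tool):

```python
import numpy as np

def heightmap_to_vertices(gray, height_scale=1.0):
    """Lift a 2D grayscale image into 3D: pixel position -> (x, y),
    grayscale value -> relative height z."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]           # pixel grid positions
    zs = gray * height_scale              # grayscale -> relative height
    return np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

gray = np.linspace(0, 1, 16).reshape(4, 4)     # toy 4x4 "image"
verts = heightmap_to_vertices(gray, height_scale=2.0)
print(verts.shape)  # (16, 3): one 3D vertex per pixel
```

Connecting neighboring vertices with edges then yields the mesh surface described above.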

When AI processes surface images, attempting to convert noisy 2D data into accurate 3D representations, misinterpretations arise. Noise in the 2D pixels disrupts surface reconstruction, causing originally smooth, regular surfaces to appear rough and randomly textured.

Beyond surface distortions, noise also affects microscopic structures in 3D reconstruction. When stacking a series of 2D cross-sections to form a 3D structure, noise alters morphological characteristics, further compromising accuracy.
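The effect on stacked cross-sections can be illustrated with a toy volume (a clean cylinder is assumed here purely for demonstration): adding per-slice pixel noise changes which voxels survive a threshold, and so changes the recovered morphology.

```python
import numpy as np

# Illustrative sketch: stack 2D cross-sections into a 3D volume, then
# show how per-slice pixel noise perturbs the recovered shape.
rng = np.random.default_rng(1)
size, n_slices = 64, 32
yy, xx = np.mgrid[0:size, 0:size]
disk = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(float)
clean = np.stack([disk] * n_slices)               # a clean cylinder
noisy = clean + rng.normal(0, 0.3, clean.shape)   # imaging noise per slice

# Thresholding at 0.5 recovers the shape; noise alters its voxel count.
clean_voxels = int((clean > 0.5).sum())
noisy_voxels = int((noisy > 0.5).sum())
print(clean_voxels == noisy_voxels)  # False: noise altered the morphology
```

Stray voxels appear outside the true surface while others inside are lost, which is exactly the morphological distortion described above.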

































































By analyzing the patterns underlying these irregular noise distributions, we can turn to mathematical models, such as random walks or random-phase sums, that describe how noise behaves.  

A typical signal can be expressed as a sum of sinusoidal functions of time and space, which allows noise to be transformed into controllable 3D data. Leveraging this approach, I begin exploring ways to reshape the surface and morphology of 3D models.
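A hedged sketch of such a random-phase sum (frequencies, amplitudes, and the falloff rule below are illustrative choices, not a fixed recipe): a surface is built from sinusoids whose phases are drawn at random, turning "noise" into controllable 3D relief.

```python
import numpy as np

def random_phase_surface(size=64, n_waves=8, seed=0):
    """Build a height field as a sum of sinusoids with random phases."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size] / size    # unit-square coordinates
    z = np.zeros((size, size))
    for _ in range(n_waves):
        fx, fy = rng.integers(1, 6, 2)        # random spatial frequencies
        phase = rng.uniform(0, 2 * np.pi)     # the random phase
        amp = 1.0 / (fx + fy)                 # damp higher frequencies
        z += amp * np.sin(2 * np.pi * (fx * x + fy * y) + phase)
    return z

z = random_phase_surface()
print(z.shape)  # (64, 64) height field, ready for depth conversion
```

Because every term is a deterministic sinusoid, the resulting "noise" surface can be regenerated, scaled, or reshaped at will, which is what makes it a sculptural material rather than mere interference.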