The advancement of generative models, including Generative Adversarial Networks (GANs) and Stable Diffusion models, has created an urgent need for effective techniques to detect Artificial Intelligence (AI)-generated images. This study evaluates the effectiveness of three detection methods: MaskSim, a CLIP-ViT-based Universal Fake Detector (UFC), and FLODA (FLorence-2 Optimized for Deepfake Assessment), a Florence-2-based approach. Six synthetic datasets, encompassing three categories (human faces, wild animals, and objects), were analysed, with images generated using StyleGAN and Stable Diffusion models. The results demonstrate that FLODA achieves robust performance across datasets, while MaskSim and UFC show varying effectiveness depending on the dataset and generator. Fourier spectrum analysis highlights the importance of noise residuals in identifying model artifacts. These findings provide critical insights into the strengths and limitations of existing methods, underscoring the need for adaptable detection techniques to ensure reliable identification of AI-generated content, with significant implications for digital forensics and deepfake mitigation.
Comparative Evaluation of Synthetic Image Detectors: Insights from Generative Adversarial Networks and Stable Diffusion Generators
Abate, Andrea; Cimmino, Lucia; Polsinelli, Matteo
2025
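The abstract credits Fourier spectrum analysis of noise residuals with exposing generator artifacts. As a rough, self-contained illustration of that general idea (not the paper's actual pipeline — the function name, the simple box-blur residual, and all parameters here are assumptions), one can high-pass filter an image and inspect the log-magnitude of its centered 2-D FFT, where periodic upsampling artifacts from GANs and diffusion models tend to appear as off-center peaks:

```python
import numpy as np

def fourier_spectrum_of_residual(image: np.ndarray) -> np.ndarray:
    """Log-magnitude Fourier spectrum of a simple high-pass noise residual.

    `image` is a 2-D grayscale array. The residual here is the image minus
    a 3x3 box blur (an assumption for illustration; detectors in this area
    typically use a denoiser or a learned filter instead).
    """
    h, w = image.shape
    img = image.astype(float)

    # Box blur via a manual 3x3 convolution, to keep the sketch
    # dependency-free beyond NumPy.
    padded = np.pad(img, 1, mode="reflect")
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0

    # High-pass noise residual.
    residual = img - blurred

    # Centered 2-D FFT; periodic generator artifacts show up as bright
    # off-center peaks in this spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(residual))
    return np.log1p(np.abs(spectrum))
```

Averaging such spectra over many images from one generator is a common way to visualise its characteristic frequency fingerprint.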


