Hello! I am a second-year PhD student at UC Berkeley and part of Laura Waller's Computational Imaging Lab as well as Jacob Yates' Active Vision and Neural Computation Lab.
There is a significant gap between how brains and computers perceive the world. This gap poses challenges for tasks that require interpreting complex, dynamic environments, such as self-driving cars. To address it, I use insights from neuroscience to develop new theoretical and empirical methods that do not suffer from these limitations. By combining principles of brain-like perception with modern computational methods, my research aims to bridge this divide and enable safer, more adaptive, and more efficient autonomous systems. Much of this work is still in progress, but if you are interested in learning more, you can check out my recent work on Poisson Variational Autoencoders. An early draft of follow-up theoretical and experimental work is also available here.
Summary
During my undergraduate degree, I developed a toolbox that leverages characteristics of the human visual system to improve the display design process (BiPMAP). It has been deployed by CIVO and various companies for applications in VR/AR and mobile phones. While interning at Illumina, I built a framework to unify all genetic sequencing services, which is now deployed under the name DRAGEN-Reports. Since graduating, I have been building data-driven models of the early primate visual system at unprecedented spatiotemporal scale and rigor (fixational-transients). In addition, I have been working with event-based cameras, which are inspired by the retina and designed to capture dynamics. Recently, I showed theoretically and empirically that the noise in event cameras can be used to recover the static scene, which was previously thought to require additional hardware to capture (Noise2Image). I am currently working on visual vibrometry with event cameras, a project still in progress.
I am grateful to both:
The NSF Graduate Fellowship
The Center for Innovation in Vision and Optics (CIVO) Fellowship
for sponsoring my research and studies.
Recent Projects
Spotlight @ NeurIPS 2024
Hadi Vafaii, Dekel Galor, Jacob L. Yates
Introduce an algorithm for differentiating through Poisson sampling (a minimal sketch follows this list).
Derive generative modeling theory for a variational autoencoder with Poisson approximate posterior.
Unify neuroscience theories of rate coding, predictive coding, efficient coding, sparse coding, neural sampling, etc.
Implement the PVAE and demonstrate that it learns superior latent representations that are sparse, non-negative integer-valued, and more informative for downstream tasks such as classification.
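To give a flavor of what differentiating through Poisson sampling can look like in practice, here is a minimal PyTorch sketch using a straight-through estimator: the forward pass draws a discrete Poisson sample, while the backward pass routes gradients through the rate. This is an illustrative assumption, not necessarily the estimator derived in the paper.

```python
import torch

def poisson_sample_st(rate):
    """Sample z ~ Poisson(rate) with a straight-through gradient.

    Illustrative sketch only: the forward value is a discrete sample,
    while the backward pass treats dz/drate as 1 (gradients flow to rate).
    """
    z = torch.poisson(rate.detach())   # non-differentiable discrete sample
    return z + rate - rate.detach()    # value == z, gradient w.r.t. rate == 1

# Usage: gradients reach the (learnable) rates despite the discrete sample.
rate = torch.full((4,), 3.0, requires_grad=True)
z = poisson_sample_st(rate)
z.sum().backward()
print(z, rate.grad)                    # rate.grad is all ones
```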
ICCP 2023 Spotlight Poster & Demo
Dekel Galor*, Ruiming Cao*, Jacob L. Yates, Laura Waller
Reconstruct static scenes from noise statistics in event-based cameras (see the sketch after this list)
Works in post-processing and requires no hardware modifications
Applications in
adaptive event denoising
simultaneous denoising & scene estimation
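As a rough picture of the post-processing idea, the sketch below accumulates per-pixel noise-event counts over a quiet interval and maps them through a simple monotonic function to an intensity estimate. The event layout (t, x, y, polarity), the helper names, and the power-law mapping are illustrative assumptions; the actual Noise2Image method characterizes the intensity dependence of the noise statistics rather than using this placeholder.

```python
import numpy as np

def noise_event_counts(events, shape, t_window):
    """Count noise events per pixel within a still interval.

    `events` is assumed to be an (N, 4) array of (t, x, y, polarity) rows
    recorded while the scene is static (or stabilized).
    """
    counts = np.zeros(shape, dtype=np.float32)
    keep = events[:, 0] < t_window                # events inside the window
    xs = events[keep, 1].astype(int)
    ys = events[keep, 2].astype(int)
    np.add.at(counts, (ys, xs), 1.0)              # per-pixel accumulation
    return counts

def counts_to_intensity(counts, gamma=0.5):
    """Placeholder monotonic map from noise-event rate to intensity;
    in practice this relationship is calibrated or learned from data."""
    return (counts / (counts.max() + 1e-8)) ** gamma
```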
Dekel Galor*, Guanghan Meng*, Laura Waller, Martin Banks
Applying tools from signal processing and vision science to improve the display design process for VR.
Created a novel contrast sensitivity function (CSF formula) for modeling human vision.
Created a GPU-accelerated software toolkit for simulating human perception during display design (see the sketch after this list).
Available as a GPU-accelerated serverless website, and alternatively as an open-source executable.
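For intuition about how a CSF can be used to simulate perception, the sketch below weights an image's spatial-frequency content by a classic Mannos-Sakrison-style band-pass CSF. The constants and the `simulate_perceived` helper are stand-ins for illustration, not the new CSF formula or the GPU pipeline in BiPMAP.

```python
import numpy as np

def csf_bandpass(f):
    """Band-pass contrast sensitivity vs. spatial frequency f (cycles/degree),
    using the classic Mannos & Sakrison form as an illustrative stand-in."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def simulate_perceived(image, pixels_per_degree=60.0):
    """Filter an image by the CSF in the frequency domain to approximate
    how visible each spatial frequency is to a human observer."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * pixels_per_degree    # vertical, cycles/degree
    fx = np.fft.fftfreq(w) * pixels_per_degree    # horizontal, cycles/degree
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    weight = csf_bandpass(f)
    weight /= weight.max() + 1e-8                 # normalize peak gain to 1
    return np.real(np.fft.ifft2(np.fft.fft2(image) * weight))
```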
SFN 2023 Poster
Dekel Galor, Jude F. Mitchell, Daniel A. Butts, Jacob L. Yates
Uncovering the role of eye movements as a primary driver of neuron activity in the visual cortex.
Demonstrate simple mechanisms that explain this activity using only a feed-forward model! (No recurrence or external motor signals.)
Propose a novel activation function, SplitReLU, and show that CNNs using it exhibit emergent physiological selectivity throughout (sketched after this list).
Break the traditionally used "upper bound" of explainable variance, showing that fixational eye movements likely drive a significant share of activity in the visual cortex.
Show that large ResNets can match more interpretable models with little engineering, and compare the two using interpretable ML.
State-of-the-art neural data from the primary visual cortex around the center of gaze (!), with spatiotemporal modeling at 240 Hz (!), during natural free-viewing (!), and using extremely high-precision (sub-cone resolution!) DDPI eye tracking.
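Since SplitReLU is not a standard layer, here is one plausible PyTorch sketch of the idea: split each pre-activation into rectified ON (ReLU(x)) and OFF (ReLU(-x)) channels so downstream weights can treat increases and decreases in the input separately. This parameterization is an assumption for illustration; the exact form in the poster may differ.

```python
import torch
import torch.nn as nn

class SplitReLU(nn.Module):
    """Split each channel into rectified ON (ReLU(x)) and OFF (ReLU(-x)) halves,
    doubling the channel count so later layers weight the two signs separately."""
    def forward(self, x):
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=1)

# Usage: a conv layer followed by SplitReLU yields twice as many channels.
layer = nn.Sequential(nn.Conv2d(1, 8, kernel_size=5, padding=2), SplitReLU())
out = layer(torch.randn(2, 1, 32, 32))   # shape: (2, 16, 32, 32)
```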
Dekel Galor* and Ryan Mei*
Video compression, transmission over RF, and decompression.
Good quality at compression ratios above 30 (27.5 dB PSNR at 1 KB/frame)
Lightweight NN for simultaneous deblocking and frame interpolation.
Motion compensation inspired by H.264
Frame differencing and quantization inspired by DPCM (sketched after this list)
Multilevel Wavelet Decomposition and DCT.
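To show how the frame-differencing and quantization pieces fit together, here is a hypothetical NumPy sketch of a DPCM-style loop: the encoder quantizes the residual between each frame and the previous reconstruction, keeping encoder and decoder in sync. Motion compensation, the wavelet/DCT transforms, and entropy coding are omitted; the helper names and quantization step are assumptions.

```python
import numpy as np

def dpcm_encode(frames, step=8.0):
    """Quantize the residual between each frame and the previous
    reconstruction (DPCM-style); `frames` is a (T, H, W) array."""
    prev = np.zeros_like(frames[0], dtype=np.float32)
    codes = []
    for frame in frames.astype(np.float32):
        residual = frame - prev                        # frame differencing
        q = np.round(residual / step).astype(np.int16) # uniform quantization
        prev = prev + q * step                         # decoder-matched reconstruction
        codes.append(q)
    return codes

def dpcm_decode(codes, step=8.0):
    """Invert the encoder by accumulating dequantized residuals."""
    prev = np.zeros_like(codes[0], dtype=np.float32)
    frames = []
    for q in codes:
        prev = prev + q.astype(np.float32) * step
        frames.append(prev.copy())
    return frames
```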
Publications
Please refer to my Google Scholar page.