Research projects
I have been associated with the following projects since my graduate
studies in the EECS department at the University of California, Berkeley.
- Challenges in TV White Space operations in India
Students: Gaurang Naik, Naireeta
Kansabanik, Sudesh Singhal, Garima Maheshwari, and Meghna Khaturia
Research staff: Rajeev Paniyar
TV white space operation has been taken up by regulators and technology
companies in the United States and the United Kingdom over the past decade.
Exciting times are ahead, with Super-WiFi and IEEE 802.11af-type
technologies on the horizon. In the Indian context, however, not much work
has been done on TV white spaces, even though India is one of the fastest
growing telecommunication markets in the world. The focus of our work is
the exploration and development of technologies suitable for white space or
unlicensed operation in the TV band in India.
- Spatial sampling, quantization, and reconstruction of physical fields
Students: Akshta Athawale,
Kaushani Majumder, Ankur Mallick, Abhinav Kumar, and Ajinkya Jayawant
Summer students: Alankrita Bhatt
(IIT Kanpur, BTech), Nikita Vasudevan (LNMIIT Jaipur, BTech)
Remote sensing of physical phenomena by an array of sensors is of
interest. Since spatially distributed physical fields cannot be prefiltered
before sampling, the usual prefiltering-based techniques from centralized
sampling setups cannot be used. Within these natural constraints, a host of
sampling problems can be explored. In one such problem, we have studied the
A/D conversion of bounded dynamic-range, two-dimensional, smooth,
non-bandlimited fields. We provide upper bounds on the pointwise error
between the field and its reconstruction in terms of the spectral
properties of the field. More problems are under exploration.
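The flavor of these pointwise-error bounds can be illustrated with a toy one-dimensional sketch (the actual work treats two-dimensional fields; the test field, quantizer, and interpolation scheme below are illustrative choices, not the ones from the research): sample a smooth bounded field on a grid, quantize each sample, reconstruct by linear interpolation, and observe the worst-case pointwise error shrink as the grid and the quantizer are refined.

```python
import math

def field(x):
    """A smooth, bounded, non-bandlimited test field (a Gaussian bump)."""
    return math.exp(-4.0 * (x - 0.5) ** 2)

def quantize(v, bits):
    """Uniform scalar quantizer for values in [0, 1]."""
    levels = 2 ** bits
    return min(round(v * (levels - 1)) / (levels - 1), 1.0)

def reconstruct(samples, spacing, x):
    """Linear interpolation between quantized samples."""
    i = min(int(x / spacing), len(samples) - 2)
    frac = x / spacing - i
    return (1 - frac) * samples[i] + frac * samples[i + 1]

def max_pointwise_error(num_samples, bits):
    """Worst-case |field - reconstruction| over a dense evaluation grid."""
    spacing = 1.0 / (num_samples - 1)
    samples = [quantize(field(i * spacing), bits) for i in range(num_samples)]
    grid = [j / 1000 for j in range(1001)]
    return max(abs(field(x) - reconstruct(samples, spacing, x)) for x in grid)

for n, b in ((8, 2), (32, 4), (128, 8)):
    print(n, b, max_pointwise_error(n, b))
```

The pointwise error here splits into an interpolation term, controlled by the field's smoothness and the sample spacing, and a quantization term, controlled by the ADC step size, mirroring the kind of decomposition such bounds typically exhibit.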
- Oversampling and ADC precision tradeoffs in sampling of signals
Students: Akshta Athawale, Aishwarya
Goyal, T. V. Srikanth, Ayush Baid, and Ashray Malhotra
Summer students: Swati Vyas (IIT
Guwahati, BTech)
Any signal A/D conversion process reduces signal values to bits; this
process, carried out by analog-to-digital converters (ADCs), is called
quantization. Each ADC has a precision, which determines how accurate the
quantization process is. A lack of quantization precision can be
compensated by a suitably designed oversampling technique. This opens
avenues for research problems related to the design of sampling or A/D
conversion schemes for signals. One would expect ADC precision to trade off
against the oversampling ratio; such tradeoffs are the subject of this
research.
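A minimal sketch of this tradeoff (a classic dithered-quantization illustration, not the group's actual scheme): a coarse ADC with uniform dither gives an unbiased estimate of the input, so averaging N oversampled conversions shrinks the error roughly as 1/sqrt(N), trading sampling rate for effective precision.

```python
import random

def dithered_adc(x, step):
    """One uniform quantization of x with additive dither in [-step/2, step/2)."""
    d = random.uniform(-step / 2, step / 2)
    return round((x + d) / step) * step

def oversampled_estimate(x, step, n):
    """Average n independent dithered conversions of the same value x."""
    return sum(dithered_adc(x, step) for _ in range(n)) / n

random.seed(0)
x = 0.3337      # signal value to digitize
step = 0.25     # coarse ADC: quantization step much larger than target accuracy
for n in (1, 16, 256):
    print(n, abs(oversampled_estimate(x, step, n) - x))
```

With uniform dither of one quantization step, the expected quantizer output equals the input, which is what makes plain averaging work here; more refined oversampling schemes shape or filter the quantization noise instead.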
- SRAM reliability models for bit-level failures
Students: Amrut Kolhapure, Sreeja
Vasantham, Gautam Kapila, and Sonal Gupta
With technology scaling, static random access memory (SRAM) leakage power
is expected to consume a large fraction of the total power budget.
Reduction of the SRAM supply-voltage, to reduce this leakage power, is
impeded by reliability concerns arising due to various failure mechanisms:
(i) soft-errors, (ii) oxide-trap induced current fluctuations, (iii)
parametric failures due to process variations, and (iv) supply-voltage
noise. These failures are transient in nature, and hence they can couple
with each other. Currently, separate supply-voltage margins are provisioned
for each of these failure mechanisms to ensure smooth operation of the SRAM
within some target reliability. This work aims to explore and understand
the bit-error probability of an SRAM cell as a function of the supply
voltage.
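As a cartoon of such a model (the functional forms and every constant below are hypothetical placeholders, not measured failure rates), one can assign each mechanism a supply-voltage-dependent failure probability and, treating the mechanisms as independent, combine them into a single per-bit error probability:

```python
import math

# Hypothetical per-mechanism failure probabilities as functions of Vdd (volts).
# The exponential forms and all constants are illustrative, not fitted data.
def p_soft_error(vdd):
    return 1e-9 * math.exp(-2.0 * (vdd - 0.6))

def p_oxide_trap(vdd):
    return 1e-10 * math.exp(-4.0 * (vdd - 0.6))

def p_parametric(vdd):
    return 1e-8 * math.exp(-8.0 * (vdd - 0.6))

def p_supply_noise(vdd):
    return 1e-11 * math.exp(-6.0 * (vdd - 0.6))

def bit_error_probability(vdd):
    """Independent-mechanism combination: a bit fails if any mechanism fires."""
    p_ok = 1.0
    for p in (p_soft_error, p_oxide_trap, p_parametric, p_supply_noise):
        p_ok *= 1.0 - p(vdd)
    return 1.0 - p_ok

for vdd in (0.6, 0.8, 1.0, 1.2):
    print(vdd, bit_error_probability(vdd))
```

A combined model of this kind replaces four separately provisioned voltage margins with a single curve of bit-error probability versus supply voltage; the research question is what that curve actually looks like once the mechanisms' interactions are accounted for.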
- System-level power-optimization in SRAM: a reliability perspective
Advisors: Jan Rabaey and Kannan
Ramchandran
Reduction of the SRAM supply-voltage, to reduce the leakage power, is
impeded by reliability concerns arising due to various failure mechanisms:
(i) soft-errors, (ii) oxide-trap induced current fluctuations, (iii)
parametric failures due to process variations, and (iv) supply-voltage
noise. A statistical or probabilistic setup is used to model failure
mechanisms like soft-errors or process-variations, and error-probability of
stored data is used as a metric for reliability. Error models which combine
various SRAM cell failure mechanisms are developed. In a probabilistic
setup, the bit-error probability increases due to supply voltage reduction,
but it can be reduced by suitable choices of error-correction code and
data-refresh (scrubbing) rate. The leakage-power — including redundancy
overhead, coding power, and data-refresh power — is set as the
cost-function and an error-probability target is set as the constraint. The
cost-function is minimized subject to the constraint, over the choices of
data-refresh rate, error-correction code, and supply voltage. The
optimization procedure is evaluated using failure-rate simulations for
90 nm and 65 nm CMOS technologies.
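The shape of this constrained optimization can be sketched as a grid search (every model below, from the failure-rate curve to the power figures and the candidate code list, is a hypothetical stand-in for the paper's fitted models): minimize total per-bit power over supply voltage, ECC parameters, and scrubbing interval, subject to a word-error-probability target.

```python
import math
from itertools import product

# All models below are illustrative stand-ins, not fitted or measured values.

def raw_bit_error(vdd, interval):
    """Per-bit failure probability within one scrubbing interval (hypothetical)."""
    rate = 1e-6 * math.exp(-10.0 * (vdd - 0.5))   # failures per second per bit
    return 1.0 - math.exp(-rate * interval)

def word_error(p, n, t):
    """Probability that a length-n ECC word has more than t bit errors."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1))

def total_power(vdd, n, k, interval):
    """Leakage with redundancy overhead, plus amortized scrubbing power."""
    leakage = 1e-9 * math.exp(3.0 * vdd)          # per-bit leakage grows with Vdd
    scrub = 1e-12 / interval                      # refresh energy per bit per scrub
    return leakage * n / k + scrub

codes = [(1, 1, 0), (72, 64, 1), (39, 32, 1)]     # (n, k, t): none, SECDED-like, ...
target = 1e-12                                    # word-error-probability constraint

best = None
for vdd, (n, k, t), interval in product(
        [0.5, 0.6, 0.7, 0.8, 0.9, 1.0], codes, [1e-3, 1e-1, 10.0]):
    if word_error(raw_bit_error(vdd, interval), n, t) <= target:
        power = total_power(vdd, n, k, interval)
        if best is None or power < best[0]:
            best = (power, vdd, (n, k, t), interval)

print(best)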
- Error-correction code detection from observed noisy binary data
Students: Arti Yardi, Ashok Vardhan, and
Sai Bhargav
In cognitive radio networks, it is helpful for the secondary user to have
knowledge of the primary user's codebook. Similarly, in security or
military applications, a third party may want to discover the
error-correction code being used in a communication. In this work, schemes
to detect or estimate the error-correction code being used by a transmitter
are studied. Of particular interest is the case when the data is affected
by a binary symmetric channel, making the observed data noisy.
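One simple detection statistic (a textbook-style sketch, not necessarily the scheme under study): if a candidate parity check of Hamming weight w belongs to the dual of the transmitted code, then over a BSC with crossover probability p each noisy codeword satisfies it with probability (1 + (1-2p)^w)/2 > 1/2, while a check outside the dual is satisfied at rate 1/2. Counting satisfied checks over many observed words therefore separates the two hypotheses; the (7,4) Hamming code and the specific checks below are illustrative.

```python
import itertools
import random

# Parity-check matrix of the (7,4) Hamming code (column j is j in binary).
H = [(1, 0, 1, 0, 1, 0, 1),
     (0, 1, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]

def satisfies(check, word):
    """True if word has even parity on the support of check (GF(2) inner product is 0)."""
    return sum(c & w for c, w in zip(check, word)) % 2 == 0

# Enumerate all 16 codewords by brute force over {0,1}^7.
codewords = [w for w in itertools.product((0, 1), repeat=7)
             if all(satisfies(h, w) for h in H)]

def bsc(word, p, rng):
    """Pass a binary word through a binary symmetric channel with crossover p."""
    return tuple(b ^ (rng.random() < p) for b in word)

rng = random.Random(1)
p = 0.05
observed = [bsc(rng.choice(codewords), p, rng) for _ in range(2000)]

valid_check = H[0]                    # weight-4 check in the dual of the true code
random_check = (1, 1, 0, 0, 0, 0, 1)  # an arbitrary vector not in the dual

for name, h in (("valid", valid_check), ("random", random_check)):
    rate = sum(satisfies(h, w) for w in observed) / len(observed)
    print(name, rate)
```

Here the valid check is satisfied at roughly (1 + 0.9^4)/2 ≈ 0.83 of the observed words, while the non-dual check hovers near 0.5, so a simple threshold on the satisfaction rate detects whether a candidate check is consistent with the unknown code.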