Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
We are researching novel ways to use computer vision, crowdsourcing, and deep learning algorithms to teach machines to interact with humans, improve affect recognition, and perform general-purpose video emotion classification.
Published:
Short description of portfolio item number 2
Published in Ultrasonic Imaging and Tomography, SPIE Medical Imaging, 2020
In this work, we present a machine learning method to guide an ultrasound operator towards a selected area of interest. Unlike other automatic medical imaging methods, ultrasound imaging is one of the few imaging modalities where the operator’s skill and training are critical in obtaining high quality images. Additionally, due to recent advances in affordability and portability of ultrasound technology, its utilization by non-experts has increased. Thus, there is a growing need for intelligent systems that have the ability to assist ultrasound operators in both clinical and non-clinical scenarios. We propose a system that leverages machine learning to map real time ultrasound scans to transformation vectors that can guide a user to a target organ or anatomical structure. We present a unique training system that passively collects supervised training data from an expert sonographer and uses this data to train a deep regression network. Our results show that we are able to recognize anatomical structure through the use of ultrasound imaging and give the user guidance toward obtaining an ideal image.
Download here
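As a rough illustration of the approach described above, the sketch below pairs a small convolutional regression network with an MSE objective, mapping a single ultrasound frame to a guidance (transformation) vector. The six-dimensional output, layer sizes, input resolution, and training details are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a deep regression network that maps an ultrasound
# frame to a probe-motion guidance vector (assumed here to be a 6-DOF offset:
# 3 translations + 3 rotations). All sizes and the MSE loss are assumptions.
import torch
import torch.nn as nn

class GuidanceRegressor(nn.Module):
    def __init__(self, out_dim: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, out_dim)  # predicted transformation vector

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One training step on (frame, expert transformation) pairs of the kind
# collected passively from a sonographer.
model = GuidanceRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

frames = torch.randn(8, 1, 128, 128)   # stand-in ultrasound frames
targets = torch.randn(8, 6)            # stand-in expert transforms
loss = loss_fn(model(frames), targets)
loss.backward()
optimizer.step()
```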
Published in IEEE Computer Vision and Pattern Recognition Workshops, CVPRW, 2020
Sparse coding algorithms have been used to model the acquisition of V1 simple cell receptive fields as well as to accomplish the unsupervised acquisition of features for a variety of machine learning applications. The Locally Competitive Algorithm (LCA) provides a biologically plausible implementation of sparse coding based on lateral inhibition. LCA can be reformulated to support dictionary learning via an online local Hebbian rule that reduces predictive coding error. Although originally formulated in terms of leaky integrator rate-coded neurons, LCA based on lateral inhibition between leaky integrate-and-fire (LIF) neurons has been implemented on spiking neuromorphic processors, but such implementations preclude local online learning. We previously reported that spiking LCA can be expressed in terms of predictive coding error in a manner that allows for unsupervised dictionary learning via a local Hebbian rule, but the issue of stability has not previously been addressed.
Download here
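For readers unfamiliar with LCA, the following is a minimal rate-coded sketch of the dynamics referenced above: membrane potentials are driven by the input, inhibited laterally through dictionary correlations, thresholded into sparse activations, and the dictionary is updated with a local Hebbian rule on the reconstruction (predictive coding) error. Dimensions, step counts, and the learning rate are assumptions for illustration; the spiking LIF formulation discussed in the paper is not shown.

```python
# Minimal rate-coded LCA sketch (not the paper's spiking implementation).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 64, 128
Phi = rng.standard_normal((n_inputs, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary elements

def lca_encode(x, Phi, lam=0.1, tau=10.0, steps=200):
    b = Phi.T @ x                              # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # membrane potentials
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
        u += (b - u - G @ a) / tau             # leaky integrator dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

x = rng.standard_normal(n_inputs)
a = lca_encode(x, Phi)
residual = x - Phi @ a                         # predictive coding error
Phi += 0.01 * np.outer(residual, a)            # local Hebbian dictionary update
Phi /= np.linalg.norm(Phi, axis=0)             # keep dictionary elements unit norm
```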
Published in IEEE Computer Vision and Pattern Recognition, CVPR, 2020
In a general sense, adversarial attack through perturbations is not exclusively a machine learning vulnerability. Human and biological vision can also be fooled by various methods, e.g. by mixing high and low frequency images together, by altering semantically related signals, or by sufficiently distorting the input signal. However, the amount and magnitude of distortion required to alter biological perception is at a much larger scale. In this work, we explore this gap through the lens of biology and neuroscience in order to understand the robustness exhibited in human perception. Our experiments show that by leveraging sparsity and modeling the biological mechanisms at a cellular level, we are able to mitigate the effect of adversarial alterations to the signal that have no perceptible meaning. Furthermore, we present and illustrate the effects of top-down functional processes that contribute to the inherent immunity in human perception, in the context of exploiting these properties to make a more robust machine vision system.
Download here
Published in International Conference on Neuromorphic Systems, ICONS, 2020
We present research in the modeling of neurons within Drosophila (fruit fly) olfaction. We describe the process from data collection to model creation and spike generation. Our approach utilizes computational elements such as spiking neural networks that employ leaky integrate-and-fire neurons with adaptive firing behavior that more closely mimic biological neurons. We describe the methods of several learning implementations in both software and hardware. Finally, we present both quantitative and qualitative results on learning spiking neural network models.
Download here
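The sketch below simulates the kind of leaky integrate-and-fire neuron with adaptive firing behavior mentioned above: each spike raises the firing threshold, which then decays back, so a constant input produces progressively longer inter-spike intervals. All parameter values are illustrative assumptions, not fitted Drosophila data.

```python
# Illustrative adaptive-threshold LIF neuron with simple Euler integration.
import numpy as np

def simulate_adaptive_lif(current, dt=1e-3, tau_m=20e-3, tau_th=100e-3,
                          v_rest=0.0, v_reset=0.0, theta0=1.0, theta_step=0.5):
    v, theta = v_rest, theta0
    spikes = []
    for t, i_t in enumerate(current):
        v += dt / tau_m * (-(v - v_rest) + i_t)    # leaky integration of input
        theta += dt / tau_th * (theta0 - theta)    # threshold decays toward baseline
        if v >= theta:                             # spike condition
            spikes.append(t * dt)
            v = v_reset
            theta += theta_step                    # adaptation: raise the threshold
    return spikes

# Constant input; adaptation lengthens the intervals between spikes.
spike_times = simulate_adaptive_lif(np.full(2000, 2.0))
print(spike_times[:5])
```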
Published:
wHealth is an interactive storytelling application that can provide insight into a user's willingness to pay for health care and how they use quality information, and can compare aggregate rates or perform subgroup analyses, e.g. gender/age/income differences in which factors are most important. We use gamification techniques such as storytelling, personalization, and immediate feedback to drive user engagement. We are excited to announce that we are the first place winner of the Robert Wood Johnson Foundation Games to Generate Data Challenge and the recipient of $100,000!
Published:
In the field of digital pathology, an explosive amount of imaging data is being generated. Thus, there is an ever-growing need for assistive or automatic methods to analyze collections of images for screening and classification. Machine learning, and specifically deep learning algorithms, developed for digital pathology have the potential to assist in this way. Deep learning architectures have demonstrated great success over existing classification models but require massive amounts of labeled training data that either do not exist or are cost- and time-prohibitive to obtain. In this project, we present a framework for representing, collecting, validating, and utilizing cytopathology features for improved neural network classification.
Published:
Spatiotemporal Sequence Memory for Prediction using Deep Sparse Coding. For our project, we sought to create a predictive vision model using spatiotemporal sequence memories learned from deep sparse coding. This model is implemented using a biologically inspired architecture: one that utilizes sequence memories, lateral inhibition, and top-down feedback in a generative framework.
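As a loose illustration of the idea, the sketch below encodes each frame with a single-layer sparse code, learns a linear transition between the codes of consecutive frames, and decodes the transitioned code as the predicted next frame. The actual model is deep and includes lateral inhibition and top-down feedback; this single-layer, linear-transition version is an assumed simplification for illustration only.

```python
# Simplified sparse-coding sequence prediction sketch (assumed formulation).
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_codes = 256, 512
Phi = rng.standard_normal((n_pixels, n_codes))
Phi /= np.linalg.norm(Phi, axis=0)                # sparse coding dictionary
T = np.zeros((n_codes, n_codes))                  # sequence/transition memory

def encode(x, lam=0.1, steps=100, lr=0.1):
    a = np.zeros(n_codes)                         # simple ISTA-style inference
    for _ in range(steps):
        a += lr * Phi.T @ (x - Phi @ a)
        a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)
    return a

frames = rng.standard_normal((10, n_pixels))      # stand-in video frames
codes = [encode(f) for f in frames]
for a_t, a_next in zip(codes[:-1], codes[1:]):    # learn code-to-code transition
    T += 0.01 * np.outer(a_next - T @ a_t, a_t)

predicted_next = Phi @ (T @ codes[-1])            # decode predicted next frame
```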
Published:
Our experiments show that by leveraging sparsity and modeling the biological mechanisms at a cellular level, we are able to mitigate the effect of adversarial alterations to the signal that have no perceptible meaning. Furthermore, we present and illustrate the effects of top-down functional processes that contribute to the inherent immunity in human perception in the context of exploiting these properties to make a more robust machine vision system.
Published:
The goal of the group is to keep up with the literature and state of the art, learn about others' research interests, form possible collaborations, and provide a venue for members to practice upcoming talks or presentations. YouTube Channel
Published:
Sponsored by Drexel University's College of Computing & Informatics (CCI) and CCI's Diversity, Equity & Inclusion Council, join us for a conversation about fighting bias in artificial intelligence (AI). Mathematical models are often viewed as fair and objective. One might think that algorithms do not "see" race and therefore cannot be prejudiced; they base their decisions upon big-data patterns and correlations that arise from statistics. However, in an experiment conducted by the American Civil Liberties Union (ACLU), Amazon's face recognition system falsely matched 28 members of U.S. Congress with mugshots, and the false matches were disproportionately of people of color. In the past few years, studies have shown that algorithms can exhibit racial and gender bias, discriminate within computer-vision facial recognition systems, and encode gendered bias in natural language processing. As AI becomes more pervasive in consumer-based technology, it is important that considerations be taken to prevent bias in the algorithmic decision-making process. Panelists will share their knowledge of this developing topic and discuss current projects.
Published:
In this talk, I present some of our more recent work in multipath sparse coding with applications in mitigating bias in face recognition.
Class, Drexel University, Computer Science Department, 2020
This course covers the fundamentals of modern statistical machine learning. Lectures will cover fundamental aspects of machine learning, including dimensionality reduction, overfitting, ensemble learning, and evaluation techniques, as well as the theoretical foundation and algorithmic details of representative topics within clustering, regression, and classification (for example, K-Means clustering, Support Vector Machines, Decision Trees, Linear and Logistic Regression, and Neural Networks, among others). Students will be expected to perform theoretical derivations and computations, and to be able to implement algorithms from scratch. Here is a short excerpt from the class describing what machine learning is…
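In the spirit of the "implement algorithms from scratch" expectation, here is a generic from-scratch K-Means sketch (not course-provided code) showing the alternating assignment and update steps.

```python
# From-scratch K-Means: assign points to nearest center, recompute centers,
# repeat until the centers stop moving.
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]      # random init
    for _ in range(n_iters):
        # Assignment step: nearest center for each point.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):                    # converged
            break
        centers = new_centers
    return centers, labels

# Toy data: three Gaussian blobs.
X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
centers, labels = kmeans(X, k=3)
```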