FYP 2011-2012
Proposed by Dr. C.L. Wang
Last Update: June 03, 2011

 

Project 1: DIME -- DIgital ME project

 

Thanks to social networks and various digital technologies, we are better able than ever to stay connected with people. Indeed, social networks have penetrated our lives to such an extent that even their temporary absence would upset us. However, this massive connectivity has its own downside: people are starting to complain about being distracted by interruptive calls and friend requests, yet they risk appearing impolite if they turn such requests down. Targeting this problem, the DIgital ME (DIME) project proposes a digital avatar for each human user, acting on his/her behalf during social networking and communication. Specifically, the "digital avatar" resides on the user's smart handheld device. It actively learns the user's relationships with the corresponding parties and his/her interaction preferences. Upon a communication request, it observes the user's situation and decides on a suitable way to respond, e.g., adjusting privacy settings so that others cannot find the user, mimicking the user's interaction pattern, diverting the message/call to a mailbox, or explaining the user's situation.

The project will be implemented on an Android-based platform. The system records the user's conversation history and incrementally learns his/her preferences and behavior patterns. It also interacts with environmental sensors (e.g., time, location) or personal information systems (e.g., to-do lists) to understand the user's situation, availability, and willingness to be involved in the communication. A decision-making mechanism (e.g., common-sense, rule-based, or case-based reasoning) would be adopted to provide the intelligence.
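To make the rule-based option concrete, the following is a minimal sketch of how such a decision layer might look. All relationship labels, situation names, and actions here are illustrative assumptions, not part of the project specification, and a real implementation would learn or refine the rules from the recorded conversation history:

```python
# Sketch of a rule-based decision mechanism for incoming communication
# requests. Rules map (caller relationship, user situation) to an action;
# the first matching rule wins. All names are hypothetical examples.

def decide_response(caller_relation, situation):
    """Choose an action for an incoming request, given the caller's
    relationship to the user and the user's current situation
    (e.g., inferred from time, location, or the to-do list)."""
    rules = [
        (("family",   "in_meeting"), "divert_to_mailbox_with_explanation"),
        (("friend",   "in_meeting"), "divert_to_mailbox"),
        (("stranger", "in_meeting"), "hide_presence"),
        (("family",   "free"),       "accept"),
        (("friend",   "free"),       "accept"),
        (("stranger", "free"),       "auto_reply_mimicking_user"),
    ]
    for (relation, state), action in rules:
        if relation == caller_relation and state == situation:
            return action
    return "divert_to_mailbox"  # conservative default when no rule matches

print(decide_response("friend", "in_meeting"))  # divert_to_mailbox
print(decide_response("stranger", "free"))      # auto_reply_mimicking_user
```

Case-based reasoning would replace the fixed rule table with retrieval of the most similar past situation and reuse of the action taken then.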

 

Max no. of students: 3

Prerequisite: Familiarity with Google's Android Platform; some interest in Cognitive Science

 

Project 2: Similar Photo Search on GPU Cluster

 

The project is to build a content-based image retrieval system on a GPU cluster, i.e., to search for images using pictures rather than words. In essence, by simply taking a snapshot of any landmark of interest, the user lets the application determine which landmark is captured in the photo and display the corresponding information. The project involves program development on an Android mobile phone, for taking photos and retrieving search results from a web portal, and on the backend GPU cluster, for performing high-speed image feature extraction, indexing, and online search.
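The core idea of the backend pipeline can be sketched as follows: each image is reduced to a feature vector, and a query is answered by nearest-neighbour search over the stored vectors. The toy grayscale-histogram descriptor below is purely for illustration; in the actual project, feature extraction and search would use real image descriptors and run in parallel on the GPU cluster:

```python
# Illustrative sketch of content-based retrieval: images become feature
# vectors, and queries are answered by nearest-neighbour distance ranking.
# The grayscale-histogram descriptor is a toy stand-in for real features.
import math

def histogram_descriptor(pixels, bins=4):
    """Map a list of grayscale values (0-255) to a normalised histogram."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def euclidean(a, b):
    """Distance between two descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(query_pixels, database):
    """Rank database entries by descriptor distance to the query image."""
    q = histogram_descriptor(query_pixels)
    return sorted(database, key=lambda name: euclidean(q, database[name]))

# Toy index: landmark name -> precomputed descriptor (hypothetical data)
db = {
    "clock_tower": histogram_descriptor([10, 20, 200, 210, 220, 230]),
    "harbour":     histogram_descriptor([100, 110, 120, 130, 140, 150]),
}
print(search([15, 25, 205, 215, 225, 235], db)[0])  # clock_tower
```

On the GPU cluster, the per-image descriptor computation and the per-entry distance evaluation are both embarrassingly parallel, which is what makes high-speed indexing and online search feasible.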

 

Max no. of students: 3

Prerequisite: CUDA programming; familiarity with Google's Android Platform