This paper presents PADMA (Personalized Affect Detection with Minimal Annotation), a user-dependent approach for identifying affective states from spontaneous facial expressions without the need for expert annotation. The conventional approach relies on the use of key frames in recorded affect sequences and requires an expert observer to identify and annotate the frames. It is susceptible to user variability, and accommodating individual differences is difficult. The alternative is a user-dependent approach, but it would be prohibitively expensive to collect and annotate data for each user. PADMA uses a novel Association-based Multiple Instance Learning (AMIL) method, which learns a personal facial affect model through expression frequency analysis and does not need expert input or frame-based annotation. PADMA involves a training/calibration phase in which the user watches short video segments and reports the affect that best describes his/her overall feeling throughout the segment. The most indicative facial gestures are identified and extracted from the facial response video, and the association between gesture and affect labels is determined by the distribution of the gesture over all reported affects. Hence both the geometric deformation and the distribution of key facial gestures are specially adapted to each user.

This paper proposes Fast-FACS, a computer-vision-aided system that improves the speed and reliability of FACS coding. The system has three main novelties: (1) to the best of our knowledge, this is the first work to predict onsets and offsets from peaks; (2) it uses Active Appearance Models for computer-assisted FACS coding; (3) it learns an optimal metric to predict onsets and offsets from peaks. The system was tested on the RU-FACS database, which consists of natural facial behavior during a two-person interview. Fast-FACS reduced manual coding time by nearly 50% and demonstrated strong concurrent validity with manual FACS coding.
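The gesture-affect association step in PADMA can be illustrated with a simple frequency analysis: each segment carries only a single self-reported affect label, and a gesture is scored against an affect by how much of its total occurrence falls within segments of that affect. The sketch below is only illustrative (the function name, data layout, and scoring rule are assumptions, not the paper's exact AMIL formulation):

```python
from collections import Counter, defaultdict


def gesture_affect_associations(segments):
    """Score gesture-affect associations from segment-level labels.

    segments: list of (affect_label, [gesture ids observed in segment]).
    Returns {gesture: {affect: score}}, where score is the fraction of
    the gesture's total occurrences that fall in that affect's segments.
    Illustrative sketch only; PADMA's AMIL method may weight differently.
    """
    gesture_totals = Counter()           # occurrences of each gesture overall
    per_affect = defaultdict(Counter)    # occurrences split by reported affect
    for affect, gestures in segments:
        for g in gestures:
            gesture_totals[g] += 1
            per_affect[affect][g] += 1

    scores = defaultdict(dict)
    for affect, counts in per_affect.items():
        for g, c in counts.items():
            scores[g][affect] = c / gesture_totals[g]
    return dict(scores)


# Toy example: a (hypothetical) "brow_raise" gesture concentrated in
# segments the user reported as "surprise" gets a high association score.
segments = [
    ("surprise", ["brow_raise", "jaw_drop"]),
    ("surprise", ["brow_raise"]),
    ("neutral",  ["blink"]),
    ("joy",      ["smile", "brow_raise"]),
]
assoc = gesture_affect_associations(segments)
```

A gesture that appears uniformly across all reported affects scores low everywhere, which is how uninformative gestures are filtered out under this scoring rule.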
FACS (Facial Action Coding System) coding is the state of the art in manual measurement of facial actions. FACS coding, however, is labor intensive and difficult to standardize. A goal of automated FACS coding is to eliminate the need for manual coding and realize automatic recognition and analysis of facial actions. Success of this effort depends in part on access to reliably coded corpora; however, manual FACS coding remains expensive and slow.