Many computer programs, especially in scientific computing, are long running and rely on parallel processing. Long run times, combined with the increased probability of hardware failures as processor counts grow and semiconductor feature sizes shrink, demand a high level of recoverability from hardware failures. To address this, we describe a novel approach to parallel programming based on the large-grain dataflow model of computing. This approach provides a number of fault-tolerance features, including two forms of application-transparent rollback recovery: process restart and distributed checkpoint/rollback. We describe a simulator for COSMOS, a large-grain dataflow system originally developed at NASA's Jet Propulsion Laboratory and based on a distributed-memory architecture. Using the COSMOS simulator, we compare the performance and tradeoffs of process restart and checkpoint/rollback, and we develop an analytical model to validate the empirical results. The model is then used to predict the behavior of COSMOS programs in a multi-core environment, with very favorable results.