Date of Award

Fall 11-18-2011

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Applied Linguistics and English as a Second Language

First Advisor

Sara Weigle

Second Advisor

Scott Crossley

Third Advisor

Viviana Cortes

Fourth Advisor

YouJin Kim

Abstract

This study was conducted to compare the writing performance (writing products and writing processes) of the TOEFL iBT integrated writing task (writing from source texts) with that of the TOEFL iBT independent writing task (writing from a prompt only). The study aimed to determine whether writing performance varies with task type, essay scores, and the academic experience of test takers, thus clarifying the link between the expected scores and the underlying writing abilities being assessed. The data for the quantitative textual analysis of the written products were provided by Educational Testing Service (ETS) and consisted of scored integrated and independent essays produced by 240 test takers. Coh-Metrix (an automated text analysis tool) was used to analyze the linguistic features of the 480 essays. Statistical analysis revealed that the linguistic features of the essays varied with task type and essay scores. However, the study did not find a significant impact of the test takers' academic experience on most of the linguistic features investigated. To analyze the writing process, 20 English as a second language students participated in think-aloud writing sessions using the same tasks employed in the textual analysis. The writing processes of the 20 participants were coded for individual writing behaviors and compared across the two writing tasks. The identified writing behaviors were also examined in relation to the essay scores and the academic experience of the participants. Results indicated that, in general, the writing behaviors varied with task type but not with the essay scores or the academic experience of the participants. The results of the study thus provided empirical evidence that the two tasks elicited different writing performance, justifying their concurrent use on a test.
Furthermore, the study validated the scoring rubrics used in evaluating the writing performance and clarified the meaning of the scores. Implications of the study are also discussed.

DOI

https://doi.org/10.57709/2372352
