To determine the largest value of \( x \) that would make the given list of points not represent a function, we need to recall the definition of a function. A set of points is considered a function if no two different points have the same \( x \)-coordinate. In other words, each input (or \( x \)-value) must correspond to exactly one output (or \( y \)-value).
Given the points:
- \( (x, 15) \)
- \( (9, 16) \)
- \( (18, 21) \)
For the list of points to not be a function, the point \( (x, 15) \) must share the same \( x \)-coordinate as one of the other points: either \( (9, 16) \) or \( (18, 21) \).
From the points we have:
- The first point has an \( x \)-coordinate of \( x \).
- The second point has an \( x \)-coordinate of \( 9 \).
- The third point has an \( x \)-coordinate of \( 18 \).
To make the set of points not a function, we can set \( x = 9 \) or \( x = 18 \). The larger of these two values is \( 18 \).
Indeed, setting \( x = 18 \) produces the points \( (18, 15) \) and \( (18, 21) \): two distinct points with the same \( x \)-coordinate but different \( y \)-values, so the list fails to represent a function, as required.
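The check described above can be sketched in code. This is a minimal illustration (the helper name `is_function` is invented for this example): it scans the points and reports whether any \( x \)-value is paired with two different \( y \)-values.

```python
def is_function(points):
    """Return True if the (x, y) pairs could represent a function,
    i.e. no x-value maps to two different y-values."""
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False  # same input x, two different outputs
        seen[x] = y
    return True

# With x = 18, the list contains (18, 15) and (18, 21), so it fails:
print(is_function([(18, 15), (9, 16), (18, 21)]))  # False
# x = 9 also breaks it, but 18 is the larger choice:
print(is_function([(9, 15), (9, 16), (18, 21)]))   # False
# Any other x-value leaves all x-coordinates distinct:
print(is_function([(5, 15), (9, 16), (18, 21)]))   # True
```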
Thus, the largest value of \( x \) that would make the list of points not represent a function is:
\[ \boxed{18} \]