Designs that do not provide good projection and support for all areas of the stage result in an imbalanced orchestral sound, or in noticeable variations depending on where a performer stands. Several years ago, the authors presented work on the importance of whole-stage imaging. In this paper, we focus on how a design can be assessed, from the listener's point of view, for its capacity to support all sections of the orchestra on stage. Most simulation packages start with the source definition in a three-dimensional space; we suggest that whole-stage imaging requires the analysis to be carried out from the listener's point of view instead. The paper first explores ways in which whole-stage imaging can be represented, introducing a new visual tool that carries out the analysis for all possible source positions at once, in real time. We then present a deterministic (but tedious) approach to quantifying the results. Finally, the same analysis is carried out using different
machine-learning processes that attempt to quantify the whole-stage imaging. The resulting tool conveys, in a visual way, the imbalance between orchestra sections while facilitating communication with the project architects.
Keywords: Performing Arts, Ray-Tracing, Whole-Stage, Machine Learning