
On Policy Decisions of Polymorphic Inline Caches in Dynamically-Typed Language Implementations


Institution

http://id.loc.gov/authorities/names/n79058482

Degree Level

Master's

Degree

Master of Science

Department

Department of Computing Science


Abstract

Dynamically-typed languages running on Virtual Machines (VMs) are commonly used, but the lack of explicit type information poses a challenge to producing efficient code. In general, without type annotations, it is impossible to statically infer an object's type to determine which method to invoke or how a property is accessed. Inline caches (ICs) are a widely adopted technique to improve the performance of dynamically-typed languages. ICs store machine code stubs at the bytecode level to enable fast-path execution for previously seen types. However, highly polymorphic sites require a large number of fast paths, leading to more frequent code generation and a higher runtime cost to select the correct fast path for an incoming type. Therefore, implementations often set a limit on the number of IC fast paths for a bytecode. Once this limit is reached, type-specialized fast paths are forgotten, and instead, the IC executes a type-generic routine.
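The stub-limit behaviour described above can be sketched in plain JavaScript. This is a minimal illustration, not engine code: the names (`makePropertyIC`, `MAX_STUBS`) are hypothetical, the "shape" key stands in for a real hidden class, and the stubs are closures rather than compiled machine code.

```javascript
// Sketch of one property-access site with a polymorphic inline cache.
// Assumed illustrative limit; real engines pick their own per-site limits.
const MAX_STUBS = 4;

function makePropertyIC(prop, genericLookup) {
  const stubs = new Map(); // shape key -> type-specialized fast path
  let megamorphic = false; // true once the stub limit has been exceeded

  return function (obj) {
    // Stand-in for a hidden class / shape check; real engines compare
    // a shape pointer, not a string of property names.
    const shape = Object.keys(obj).sort().join(",");
    if (!megamorphic) {
      const fast = stubs.get(shape);
      if (fast) return fast(obj); // hit: run the specialized fast path
      if (stubs.size < MAX_STUBS) {
        const stub = (o) => o[prop]; // attach a new stub for this shape
        stubs.set(shape, stub);
        return stub(obj);
      }
      // Limit reached: specialization is discarded and the site
      // permanently falls back to the type-generic routine.
      megamorphic = true;
      stubs.clear();
    }
    return genericLookup(obj, prop);
  };
}
```

Driving the site with more distinct object shapes than `MAX_STUBS` flips it into the megamorphic state, after which every access pays the generic-lookup cost, which is the behaviour Stub Folding aims to mitigate.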

The central goal of this thesis is to investigate and evaluate alternative techniques to handle high degrees of polymorphism in operations that use inline caches. This thesis introduces "Stub Folding," a technique that increases the efficiency of highly polymorphic ICs. Stub Folding allows certain ICs to retain type-specialized fast paths that would otherwise be lost, enabling higher code coverage for compiler optimizations and accelerating lower execution tiers. An implementation of Stub Folding in the SpiderMonkey JavaScript engine achieves up to 25% improvement on complex applications within the JetStream 2.1 benchmark suite compared to SpiderMonkey’s previous approach. This thesis also explores techniques inspired by hardware caching policies, namely Least Recently Used (LRU) and Least Frequently Used (LFU) replacement policies. An evaluation indicates that LRU and LFU policies accelerate some programs but do not reliably increase program efficiency across a range of benchmarks.
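An LRU replacement policy of the kind evaluated above can be sketched as a small stub cache: rather than going megamorphic when the limit is hit, the site evicts the least recently used stub to make room. The class name and string "stubs" below are illustrative placeholders, not SpiderMonkey internals.

```javascript
// Sketch of an LRU replacement policy for IC stubs. A JavaScript Map
// iterates in insertion order, so re-inserting an entry on every hit
// keeps the least recently used entry at the front.
class LruStubCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.stubs = new Map(); // shape key -> stub, ordered oldest-first
  }

  get(shape) {
    if (!this.stubs.has(shape)) return undefined;
    const stub = this.stubs.get(shape);
    this.stubs.delete(shape); // move to most-recently-used position
    this.stubs.set(shape, stub);
    return stub;
  }

  add(shape, stub) {
    if (this.stubs.size >= this.capacity) {
      // Evict the least recently used stub instead of going generic.
      const lru = this.stubs.keys().next().value;
      this.stubs.delete(lru);
    }
    this.stubs.set(shape, stub);
  }
}
```

Under this policy a hot type keeps its fast path even at a highly polymorphic site, at the cost of regenerating stubs for evicted types, which is consistent with the mixed results the evaluation reports.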

Item Type

http://purl.org/coar/resource_type/c_46ec

License

Other License Text / Link

This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.

Language

en
