QoS Aware Function Scheduling in Serverless Computing
Publisher
ASTU
Abstract
Managing cloud environments poses challenges for users, such as ensuring availability, load
balancing, auto-scaling, and monitoring. These difficulties have driven the development of
a new cloud computing approach known as serverless computing. The serverless
model differs from other computing models by shifting responsibility for server
management entirely to the provider, effectively making the model serverless from the
developer's perspective. Many applications have adopted serverless computing
platforms in recent years, owing to their ease of deployment and cost-effectiveness.
However, the scheduling algorithms of traditional serverless platforms fall short of
responding to the particular characteristics of such workloads, including burstiness,
short and unpredictable execution times, and statelessness, and to their impact on
resource utilization, system throughput, cold-start rate, and QoS satisfaction. In
particular, existing techniques fail to meet the demands imposed by the combined effect
of these characteristics: scheduling millions of function invocations per second while
maintaining predictable performance.
To address these difficulties, we propose an execution time and load aware scheduler
(ETLAS) that schedules functions for serverless computing. It is a hybrid scheduling
discipline that orders function executions based on each function's predicted execution
time and arrival time, which has a notable impact on worker-node latency and throughput.
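The hybrid ordering described above can be sketched as a priority queue whose key blends arrival time with predicted execution time. This is a minimal illustration, not the ETLAS implementation: the weighting factor ALPHA and the linear priority formula are assumptions for demonstration, since the exact formula is defined in the thesis itself.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical weighting between arrival order and predicted runtime;
# the actual ETLAS priority formula is specified in the thesis.
ALPHA = 0.5

@dataclass(order=True)
class Invocation:
    priority: float
    name: str = field(compare=False)
    arrival_time: float = field(compare=False)
    predicted_runtime: float = field(compare=False)

def enqueue(queue, name, arrival_time, predicted_runtime):
    """Rank pending invocations by a blend of arrival time and predicted
    execution time, so short, early requests are dispatched first."""
    priority = ALPHA * arrival_time + (1 - ALPHA) * predicted_runtime
    heapq.heappush(queue, Invocation(priority, name, arrival_time, predicted_runtime))

queue = []
enqueue(queue, "resize-image", arrival_time=0.0, predicted_runtime=2.0)
enqueue(queue, "auth-check", arrival_time=0.1, predicted_runtime=0.05)
first = heapq.heappop(queue)  # the short auth-check is dispatched ahead
```

With these illustrative numbers, the short "auth-check" invocation overtakes the longer "resize-image" request despite arriving later, which is the behavior a runtime-aware discipline is meant to produce.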
Each worker node's queuing delay is calculated precisely to estimate how many containers
are required, and if there are not enough containers to handle all the queued requests,
reactive container spawning is used to prevent the SLO violations that queuing delays
would otherwise cause.
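The reactive-spawning decision can be illustrated with a simple capacity check: estimate the delay the queued work would incur on the current containers, and spawn extra containers only when that delay would breach the SLO. The sequential-processing delay model here is an assumption for illustration; the thesis derives its own delay estimate.

```python
import math

def containers_needed(queued_runtimes, active_containers, slo_seconds):
    """Estimate a worker node's queuing delay and decide how many extra
    containers to spawn reactively so queued requests stay within the SLO.
    Assumes each container drains its share of the queue sequentially."""
    total_work = sum(queued_runtimes)
    # Expected delay if we keep only the currently active containers.
    expected_delay = total_work / max(active_containers, 1)
    if expected_delay <= slo_seconds:
        return 0  # current capacity already meets the SLO
    # Containers required to spread the queued work under the SLO bound.
    required = math.ceil(total_work / slo_seconds)
    return required - active_containers

# Four queued requests totaling 3.4 s of work on 2 containers would take
# ~1.7 s, violating a 1 s SLO, so two extra containers are spawned.
extra = containers_needed([0.5, 0.8, 1.2, 0.9], active_containers=2, slo_seconds=1.0)
```

The design point this sketch captures is that spawning is reactive: containers are added only when the estimated queuing delay signals an impending SLO violation, rather than provisioned eagerly.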
We implemented ETLAS in Apache OpenWhisk and show that it reduces average waiting time
by 34% and increases throughput by 42% compared to the OpenWhisk worker scheduler and
multiple-queue scheduling schemes.
