
7. Semigroup Products

Basic Theory

In this section we consider semigroups relative to underlying measurable spaces (S,𝒮) and (T,𝒯). Recall that the product measurable space is (S×T,𝒮×𝒯). We will use ∗ (and concatenation) generically as the semigroup operator, regardless of the underlying base set, but the semigroup under consideration should be clear from context.

Suppose that (S,∗) and (T,∗) are measurable semigroups. The direct product is the semigroup (S×T,∗) with the binary operation defined by (x,y)∗(u,v)=(xu,yv).

Details:

We need to show that the product space is a semigroup and satisfies the assumptions we have imposed. Let (u,v),(x,y),(z,w)∈S×T. Then [(x,y)∗(u,v)]∗(z,w)=(xu,yv)∗(z,w)=(xuz,yvw)=(x,y)∗(uz,vw)=(x,y)∗[(u,v)∗(z,w)], so the associative property holds. Next suppose that (u,v)∗(x,y)=(u,v)∗(z,w). Then ux=uz and vy=vw. Hence x=z and y=w, so (x,y)=(z,w). Therefore the left cancellation law holds. Finally, [(x,y),(u,v)]↦(x,y)∗(u,v)=(xu,yv) is measurable since (x,u)↦xu and (y,v)↦yv are measurable. The graph (S×T,=) is measurable since (S,=) and (T,=) are measurable.

Note that if (x,y)∈S×T, A∈𝒮, and B∈𝒯 then (x,y)(A×B)=(xA)×(yB) and (x,y)⁻¹(A×B)=(x⁻¹A)×(y⁻¹B). Of special importance is the case where (S,𝒮,∗)=(T,𝒯,∗), so that the direct product (S²,∗) is the direct power of order 2.
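For concreteness, the componentwise operation can be sketched in code. The two factor semigroups chosen below, the nonnegative reals under addition and strings under concatenation, are illustrative assumptions and not part of the result above.

```python
# Sketch: the direct product of two semigroups via the componentwise
# operation (x,y)*(u,v) = (x*u, y*v). The factors here are illustrative:
# S = [0, inf) under + and T = strings under concatenation.

def product_op(op1, op2):
    """Return the componentwise operation on S x T."""
    return lambda a, b: (op1(a[0], b[0]), op2(a[1], b[1]))

add = lambda x, u: x + u       # operation on S
concat = lambda y, v: y + v    # operation on T

star = product_op(add, concat)

# Associativity is inherited componentwise:
a, b, c = (1.0, "x"), (2.0, "y"), (3.0, "z")
assert star(star(a, b), c) == star(a, star(b, c))
```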

Suppose again that (S,∗) and (T,∗) are semigroups with associated graphs (S,⪯) and (T,⪯), respectively. Then the graph (S×T,⪯) associated with the product semigroup (S×T,∗) is the direct product of the graphs (S,⪯) and (T,⪯).

Details:

Let (u,v),(x,y)∈S×T. By definition, (u,v)⪯(x,y) if and only if (x,y)∈(u,v)∗(S×T)=(uS)×(vT), if and only if x∈uS and y∈vT, if and only if u⪯x and v⪯y.

So all of the results in Section 1.8 on direct products of graphs apply to direct products of semigroups.

Suppose again that (S,∗) and (T,∗) are semigroups with direct product (S×T,∗).

  1. If (S,∗) and (T,∗) are positive semigroups, then so is (S×T,∗).
  2. If (S,∗) or (T,∗) is a strict positive semigroup, then so is (S×T,∗).
Details:
  1. Let e∈S and ϵ∈T denote the identity elements of (S,∗) and (T,∗) respectively. Then for (x,y)∈S×T, (x,y)∗(e,ϵ)=(xe,yϵ)=(x,y) and (e,ϵ)∗(x,y)=(ex,ϵy)=(x,y). So (e,ϵ) is the identity for (S×T,∗). Suppose now that (u,v),(x,y)∈(S×T)₊=(S×T)∖{(e,ϵ)}. Then either x≠e or y≠ϵ. In the first case, ux≠u and in the second case vy≠v. In both cases, (u,v)∗(x,y)=(ux,vy)≠(u,v).
  2. Suppose again that (x,y),(u,v)∈S×T. Then (u,v)∗(x,y)=(ux,vy). Since one of the semigroups is strictly positive, either ux≠u or vy≠v. In both cases, (u,v)∗(x,y)≠(u,v).

In part (b), the strict positive semigroup (S×T,∗) can be made into a positive semigroup with the addition of an identity element (e,ϵ), as described in Section 1.

If (S,∗) and (T,∗) are right zero semigroups, then so is the direct product (S×T,∗).

Details:

By definition, (u,v)∗(x,y)=(ux,vy)=(x,y) for all (u,v),(x,y)∈S×T.

For the next result, suppose that μ and ν are σ-finite measures on (S,𝒮) and (T,𝒯) respectively. Recall that μ×ν denotes the product measure on (S×T,𝒮×𝒯), also σ-finite.

If μ and ν are left-invariant for (S,∗) and (T,∗), respectively, then μ×ν is left-invariant for (S×T,∗).

Details:

For x∈S, y∈T, A∈𝒮, and B∈𝒯, (μ×ν)[(x,y)(A×B)]=(μ×ν)(xA×yB)=μ(xA)ν(yB)=μ(A)ν(B)=(μ×ν)(A×B). Therefore, for fixed (x,y)∈S×T, the measures C↦(μ×ν)[(x,y)C] and C↦(μ×ν)(C) on (S×T,𝒮×𝒯) agree on the measurable rectangles A×B where A∈𝒮 and B∈𝒯. Hence these measures must agree on all of 𝒮×𝒯, and so μ×ν is left-invariant for (S×T,∗).

Suppose now that (S,∗) and (T,∗) are positive semigroups with identity elements e and ϵ, respectively, and that the left-invariant measures μ and ν are unique, up to multiplication by positive constants. We show that μ×ν has the same property. Let 𝒞(T)={B∈𝒯:ν(B)∈(0,∞)} and suppose that λ is a σ-finite, left-invariant measure for (S×T,∗). For C∈𝒞(T), define μ_C(A)=λ(A×C) for A∈𝒮. Then μ_C is a regular measure on S (although it may not have support S). Moreover, for x∈S and A∈𝒮, μ_C(xA)=λ(xA×C)=λ[(x,ϵ)(A×C)]=λ(A×C)=μ_C(A), so μ_C is left-invariant for (S,∗). It follows that for each C∈𝒞(T), there exists ρ(C)∈[0,∞) such that μ_C=ρ(C)μ; that is, λ(A×C)=μ(A)ρ(C) for A∈𝒮 and C∈𝒞(T). Fix A∈𝒮 with μ(A)∈(0,∞). If C,D∈𝒞(T) and C⊆D then μ(A)ρ(C)=λ(A×C)≤λ(A×D)=μ(A)ρ(D), so ρ(C)≤ρ(D). If C,D∈𝒞(T) are disjoint then μ(A)ρ(C∪D)=λ[A×(C∪D)]=λ[(A×C)∪(A×D)]=λ(A×C)+λ(A×D)=μ(A)ρ(C)+μ(A)ρ(D), so ρ(C∪D)=ρ(C)+ρ(D). If C,D∈𝒞(T) then μ(A)ρ(C∪D)=λ[A×(C∪D)]=λ[(A×C)∪(A×D)]≤λ(A×C)+λ(A×D)=μ(A)ρ(C)+μ(A)ρ(D), so ρ(C∪D)≤ρ(C)+ρ(D). Thus ρ is a content in the sense of Halmos, and hence can be extended to a regular measure on T (which we will continue to call ρ). Thus from the equation above we have λ(A×C)=(μ×ρ)(A×C) for A∈𝒮 and C∈𝒞(T). By regularity, it follows that λ=μ×ρ. Again fix A∈𝒮 with 0<μ(A)<∞. If y∈T and B∈𝒯 then μ(A)ρ(yB)=λ(A×yB)=λ[(e,y)(A×B)]=λ(A×B)=μ(A)ρ(B), so it follows that ρ(yB)=ρ(B) and hence ρ is left-invariant for (T,∗). Thus ρ=cν for some positive constant c, and so λ=c(μ×ν). Therefore μ×ν is the unique left-invariant measure for (S×T,∗), up to multiplication by positive constants.

The following example is the semigroup version of the graph first studied in Section 1.2 and then again in Section 1.8.

Suppose that I is a set with k∈ℕ₊ elements. Let (ℕ₊×I,∗) be the direct product of (ℕ₊,+) and the right zero semigroup (I,∗). That is, (m,i)∗(n,j)=(m+n,j) for (m,i),(n,j)∈ℕ₊×I. Then

  1. (ℕ₊×I,∗) is a strict positive semigroup.
  2. The associated strict partial order graph (ℕ₊×I,≺) is the direct product of (ℕ₊,<) and the complete reflexive graph on I. That is, (m,i)≺(n,j) if and only if m<n for (m,i),(n,j)∈ℕ₊×I.
Details:

The results follow from [2] and [3]. Note that (ℕ₊,+) is a strict positive semigroup.
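The operation and the associated strict order in this example can be sketched directly; the value of k below is an arbitrary illustrative choice.

```python
# Sketch of the example: the direct product of (N+, +) and the right zero
# semigroup on I = {0, ..., k-1}, with (m,i)*(n,j) = (m+n, j).
k = 3  # illustrative size of I

def star(a, b):
    (m, i), (n, j) = a, b
    return (m + n, j)  # right zero in the second coordinate

# Associated strict order: (m,i) precedes (n,j) iff (n,j) lies in
# (m,i)*(N+ x I), i.e. iff m < n; the second coordinate is unconstrained.
def precedes(a, b):
    return a[0] < b[0]

assert star((2, 0), (5, 2)) == (7, 2)
assert precedes((2, 0), (3, 1)) and not precedes((3, 1), (3, 2))
```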

Probability

Naturally our interest is in the relationship between memoryless and exponential distributions for the individual semigroups (S,∗) and (T,∗), and for the product semigroup (S×T,∗).

Suppose that random variable X has an exponential distribution for (S,∗), random variable Y has an exponential distribution for (T,∗), and that X and Y are independent. Then (X,Y) has an exponential distribution for the product semigroup (S×T,∗).

Details:

If A∈𝒮, B∈𝒯, and (x,y)∈S×T then P[(X,Y)∈(x,y)(A×B)]=P(X∈xA,Y∈yB)=P(X∈xA)P(Y∈yB)=P(X∈xS)P(X∈A)P(Y∈yT)P(Y∈B)=P(X∈xS,Y∈yT)P(X∈A,Y∈B)=P[(X,Y)∈(x,y)(S×T)]P[(X,Y)∈A×B]. Hence for fixed (x,y)∈S×T, the finite measures on 𝒮×𝒯 given by C↦P[(X,Y)∈(x,y)C] and C↦P[(X,Y)∈(x,y)(S×T)]P[(X,Y)∈C] agree on the measurable rectangles A×B where A∈𝒮 and B∈𝒯. Hence these measures agree on 𝒮×𝒯, and so (X,Y) is exponential for (S×T,∗).

Suppose that (S,∗) has identity element e and that (T,∗) has identity element ϵ. Then (X,Y) is memoryless for (S×T,∗) if and only if X is memoryless for (S,∗), Y is memoryless for (T,∗), and X, Y are right independent.

Details:

Let F, G, and H denote the reliability functions of X, Y, and (X,Y) with respect to the semigroups (S,∗), (T,∗), and (S×T,∗) respectively. Suppose first that (X,Y) is memoryless for (S×T,∗). Then from Section 1.8, F(ux)=H(ux,ϵ)=H[(u,ϵ)∗(x,ϵ)]=H(u,ϵ)H(x,ϵ)=F(u)F(x) for u,x∈S. So X is memoryless for (S,∗). By a symmetric argument, Y is memoryless for (T,∗). Next note that H(x,y)=H[(x,ϵ)∗(e,y)]=H(x,ϵ)H(e,y)=F(x)G(y) for x∈S and y∈T, so X and Y are right independent. Conversely, suppose that X and Y are memoryless and are right independent. Then H[(u,v)∗(x,y)]=H(ux,vy)=F(ux)G(vy)=F(u)F(x)G(v)G(y)=[F(u)G(v)][F(x)G(y)]=H(u,v)H(x,y) for (u,v),(x,y)∈S×T. Hence (X,Y) is memoryless for (S×T,∗).
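In the standard case S=T=[0,∞) under +, memoryless reliability functions have the familiar exponential form, and the product memoryless property can be checked numerically. The rate parameters below are illustrative.

```python
import math

# Numeric check of the product memoryless property in the standard case
# S = T = [0, inf) under +, where memoryless means F(x) = exp(-alpha*x).
alpha, beta = 0.5, 2.0            # illustrative rates
F = lambda x: math.exp(-alpha * x)
G = lambda y: math.exp(-beta * y)
H = lambda x, y: F(x) * G(y)      # right independence gives the product form

u, v, x, y = 1.0, 2.0, 0.5, 3.0
lhs = H(u + x, v + y)             # H[(u,v)*(x,y)] in the product semigroup
rhs = H(u, v) * H(x, y)
assert abs(lhs - rhs) < 1e-12
```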

The following result is a partial converse to [7].

Suppose again that (S,∗) has identity element e and that (T,∗) has identity element ϵ. If (X,Y) is exponential for (S×T,∗), then X is exponential for (S,∗), Y is exponential for (T,∗), and X and Y are right independent.

Details:

Since (X,Y) is exponential for (S×T,∗), P(X∈xA)=P[(X,Y)∈xA×T]=P[(X,Y)∈(x,ϵ)(A×T)]=P[(X,Y)∈(x,ϵ)(S×T)]P[(X,Y)∈A×T]=P(X∈xS)P(X∈A) for x∈S and A∈𝒮. Hence X is exponential for (S,∗). By a symmetric argument, Y is exponential for (T,∗). Finally, since (X,Y) is exponential for (S×T,∗), it is also memoryless and hence X and Y are right independent by [8].

From Section 5, the random variable (X,Y) in [9] has constant rate δ∈(0,∞) for (S×T,⪯) with respect to a left-invariant measure λ on (S×T,𝒮×𝒯). Hence (X,Y) has density function h given by h(x,y)=δH(x,y)=δF(x)G(y). But we cannot conclude that X and Y are fully independent, since we don't know that λ is a product measure on (S×T,𝒮×𝒯). Note that the canonical such measure λ is given by λ(A×B)=E[1/H(X,Y);(X,Y)∈A×B]=E[1/(F(X)G(Y));X∈A,Y∈B]. But we cannot factor the expression further without full independence of X and Y. However, we have the following corollary:

In the setting of [9], suppose that λ=μ×ν is the unique left-invariant measure on (S×T,𝒮×𝒯), up to multiplication by positive constants, where μ is left-invariant for (S,∗) and ν is left-invariant for (T,∗). Then (X,Y) is exponential for (S×T,∗) if and only if X is exponential for (S,∗), Y is exponential for (T,∗), and X and Y are fully independent.

Details:

Suppose that (X,Y) has an exponential distribution for (S×T,∗). Then by [9], X is exponential for (S,∗), Y is exponential for (T,∗), and X, Y are right independent. But as in the remarks above, (X,Y) has joint density h with respect to λ=μ×ν of the form h(x,y)=δF(x)G(y), where δ∈(0,∞) and where F and G are the reliability functions of X and Y for (S,∗) and (T,∗), respectively. From the factorization theorem, X and Y are independent.

The direct product (S×T,∗) has several natural sub-semigroups. First, ({(x,ϵ):x∈S},∗) is a complete sub-semigroup isomorphic to (S,∗). Similarly, ({(e,y):y∈T},∗) is a complete sub-semigroup isomorphic to (T,∗). If (S,∗)=(T,∗), then the diagonal ({(x,x):x∈S},∗) is a complete sub-semigroup isomorphic to (S,∗). The results of this subsection apply to positive semigroups of course, since such semigroups have identities. Next we continue with example [6] and give the exponential version of the constant rate distributions studied in Section 1.8.

In the setting of example [6], suppose that random variable X has the geometric distribution on ℕ₊ with success parameter p∈(0,1), random variable Y is uniformly distributed on I, and that X and Y are independent. Then (X,Y) has an exponential distribution for (ℕ₊×I,∗). The rate constant for (ℕ₊×I,≺) is p/[k(1−p)].

Details:

Random variable X has an exponential distribution for (ℕ₊,+) and has rate p/(1−p) for the associated graph (ℕ₊,<). Similarly, random variable Y has an exponential distribution for (I,∗) and has rate 1/k for the associated complete reflexive graph on I. So the result follows from [7].
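The constant rate property in this example can be verified by direct computation: the density is h(n,j)=p(1−p)^(n−1)/k while the reliability function for the strict order is H(n,j)=P(X>n)=(1−p)^n, so the ratio is p/[k(1−p)] everywhere. The parameter values below are illustrative.

```python
# Numeric check that the geometric-uniform pair has constant rate
# p / (k(1-p)) for the strict order (m,i) < (n,j) iff m < n.
p, k = 0.3, 4  # illustrative parameters

def density(n, j):
    # P(X = n, Y = j): geometric on N+ times uniform on I
    return p * (1 - p) ** (n - 1) / k

def reliability(n, j):
    # P[(n,j) strictly precedes (X,Y)] = P(X > n) = (1-p)^n
    return (1 - p) ** n

rate = p / (k * (1 - p))
for n in range(1, 6):
    for j in range(k):
        assert abs(density(n, j) / reliability(n, j) - rate) < 1e-12
```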

Higher Order Products

Naturally, the results above can be extended to the direct product of n semigroups (S1,∗),(S2,∗),…,(Sn,∗) for n∈ℕ₊, and in particular to the n-fold direct power (Sⁿ,∗) of a semigroup (S,∗). In the latter case, if λ is left-invariant for (S,∗) then λⁿ is left-invariant for (Sⁿ,∗) for each n∈ℕ₊. The following definition gives an infinite construction that will be useful.

Suppose that (Si,∗) is a discrete semigroup with identity element ei for i∈ℕ₊. Let T={(x1,x2,…):xi∈Si for each i, and xi=ei for all but finitely many i∈ℕ₊}. As before, we define the component-wise operation x∗y=(x1y1,x2y2,…) for x=(x1,x2,…), y=(y1,y2,…)∈T. Then (T,∗) is a discrete semigroup with identity e=(e1,e2,…).

Details:

Note that T is closed under the operation ∗, and is countable by the requirement that xi=ei for all but finitely many i∈ℕ₊ for x=(x1,x2,…)∈T. The other semigroup properties follow just as in [1].

In particular, if (Si,∗) is a positive semigroup for each i∈ℕ₊, then (T,∗) is also a positive semigroup.

Marshall-Olkin Distributions

In this subsection we generalize the multivariate exponential distribution defined and studied by Marshall and Olkin. To set the stage, suppose that (S,∗) is a positive semigroup with identity e whose associated partial order graph (S,⪯) is a lattice. For n∈ℕ₊, (Sⁿ,∗) denotes the power semigroup of (S,∗) of order n, whose partial order graph (Sⁿ,⪯ₙ) is the power of (S,⪯) of order n, also a lattice. Once again, (S,∗) is measurable with respect to underlying reference space (S,𝒮), so that (Sⁿ,∗) and the graph (Sⁿ,⪯ₙ) are measurable with respect to (Sⁿ,𝒮ⁿ).

The Bivariate Case

We start with our generalized definition in the bivariate case.

Suppose that U, V, and W are right independent and have memoryless distributions on (S,∗). Let X=U∧W and Y=V∧W. Then (X,Y) has a Marshall-Olkin distribution on (S²,∗).
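In the standard semigroup ([0,∞),+) the lattice meet is the ordinary minimum and the memoryless distributions are ordinary exponentials, so the construction can be sketched as follows. The rates a, b, c are illustrative.

```python
import random

# Sketch of the bivariate Marshall-Olkin construction in the standard
# semigroup ([0, inf), +): memoryless variables are exponential and the
# lattice meet is min. The rates a, b, c below are illustrative.
def marshall_olkin(a=1.0, b=1.5, c=0.5, rng=random):
    U = rng.expovariate(a)
    V = rng.expovariate(b)
    W = rng.expovariate(c)
    return min(U, W), min(V, W)   # X = U ^ W, Y = V ^ W

rng = random.Random(42)
X, Y = marshall_olkin(rng=rng)
assert X > 0 and Y > 0
```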

Our first result follows immediately from the definition and a basic result from Section 6.

Suppose that (X,Y) has a Marshall-Olkin distribution on (S²,∗) as in definition [13]. Let F1, F2, and F3 denote the reliability functions of U, V, and W on (S,⪯), respectively. Then

  1. X=U∧W is memoryless with reliability function F1F3.
  2. Y=V∧W is memoryless with reliability function F2F3.
  3. X∧Y=U∧V∧W is memoryless with reliability function F1F2F3.

But of course, X and Y are dependent. Moreover, a Marshall-Olkin distribution places positive probability on the diagonal.

Suppose that (X,Y) has a Marshall-Olkin distribution on (S²,∗). Then P(X=Y)>0.

Details:

Suppose that X=U∧W and Y=V∧W as in definition [13]. Then W⪯U and W⪯V imply X=W and Y=W, so X=Y. Hence P(X=Y)≥P(W⪯U,W⪯V)=E[P(W⪯U,W⪯V∣W)]=E[P(W⪯U∣W)P(W⪯V∣W)]=E[F1(W)F2(W)] From our usual support assumption, the right-hand side is positive.
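In the standard exponential case the probability of a tie can be computed exactly: X=Y exactly when W⪯min(U,V), which has probability c/(a+b+c). A Monte Carlo check (rates and seed illustrative):

```python
import random

# Monte Carlo check that a Marshall-Olkin pair puts positive probability
# on the diagonal {X = Y}, using standard exponentials (rates illustrative).
rng = random.Random(0)
a, b, c = 1.0, 1.5, 0.5
n, ties = 100_000, 0
for _ in range(n):
    U, V, W = (rng.expovariate(r) for r in (a, b, c))
    X, Y = min(U, W), min(V, W)
    ties += (X == Y)              # occurs exactly when W <= min(U, V)

# Exact value: P(W <= U, W <= V) = c / (a + b + c)
exact = c / (a + b + c)
assert abs(ties / n - exact) < 0.01
```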

Suppose that λ is the left-invariant reference measure for (S,∗), so that λ² is the left-invariant measure for (S²,∗). In the continuous case with S uncountable, we typically have λ²{(x,x):x∈S}=0, so a Marshall-Olkin distribution has an absolutely continuous part and a singular part.

Suppose that (X,Y) has a Marshall-Olkin distribution on (S²,∗) as in definition [13], with reliability function H. Then H(x,y)=F1(x)F2(y)F3(x∨y) for (x,y)∈S².

Details:

By definition and right independence, H(x,y)=P(U∧W⪰x,V∧W⪰y)=P(U⪰x,W⪰x,V⪰y,W⪰y)=P(U⪰x,V⪰y,W⪰x∨y)=P(U⪰x)P(V⪰y)P(W⪰x∨y)=F1(x)F2(y)F3(x∨y) for (x,y)∈S².
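In the standard exponential case, where Fi(t)=exp(−ri t) and the lattice join is max, the reliability formula can be checked against simulation. The rates, the test point, and the seed are illustrative.

```python
import math
import random

# Monte Carlo check of H(x, y) = F1(x) F2(y) F3(x v y) in the standard
# case, with Fi(t) = exp(-ri * t) and join = max (rates illustrative).
rng = random.Random(1)
r1, r2, r3 = 1.0, 1.5, 0.5
x, y = 0.4, 0.7
n, hits = 200_000, 0
for _ in range(n):
    U, V, W = (rng.expovariate(r) for r in (r1, r2, r3))
    X, Y = min(U, W), min(V, W)
    hits += (X >= x and Y >= y)

exact = math.exp(-r1 * x) * math.exp(-r2 * y) * math.exp(-r3 * max(x, y))
assert abs(hits / n - exact) < 0.01
```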

Our next result is the abstract version of one of the original characterizations of the Marshall-Olkin distribution.

Suppose again that (X,Y) has a Marshall-Olkin distribution on (S²,∗) as in definition [13], with reliability function H. Then (X,Y) satisfies the partial memoryless property H[(t,t)∗(x,y)]=H(t,t)H(x,y) for x,y,t∈S.

Details:

Let t,x,y∈S. Using [15] we have H[(t,t)∗(x,y)]=H(tx,ty)=F1(tx)F2(ty)F3[(tx)∨(ty)]=F1(tx)F2(ty)F3[t(x∨y)] But since U, V, and W are memoryless, H[(t,t)∗(x,y)]=F1(t)F1(x)F2(t)F2(y)F3(t)F3(x∨y)=H(t,t)H(x,y)

Stated in terms of conditional probability, the partial memoryless property has the form P(X⪰tx,Y⪰ty∣X⪰t,Y⪰t)=P(X⪰x,Y⪰y) for x,y,t∈S.
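The partial memoryless property can be verified exactly in the standard case, since max(t+x,t+y)=t+max(x,y). The rates and test point below are illustrative.

```python
import math

# Direct check of the partial memoryless property in the standard case,
# using H(x, y) = exp(-r1*x - r2*y - r3*max(x, y)) (rates illustrative).
r1, r2, r3 = 1.0, 1.5, 0.5
H = lambda x, y: math.exp(-r1 * x - r2 * y - r3 * max(x, y))

t, x, y = 0.3, 0.8, 0.2
lhs = H(t + x, t + y)       # H[(t,t)*(x,y)]
rhs = H(t, t) * H(x, y)
assert abs(lhs - rhs) < 1e-12
```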

The General Multivariate Case

The extension of the Marshall-Olkin distribution to higher dimensions is a bit complicated, and requires some additional notation to state the definition and results cleanly. For n∈ℕ₊ let Bn denote the set of bit strings of length n, excluding the zero string 00⋯0.

Suppose that n∈ℕ₊ and that {Zb:b∈Bn} is a collection of right independent variables, each memoryless on (S,∗). Define Xi=inf{Zb:b∈Bn, bi=1} for i∈{1,2,…,n}. Then (X1,X2,…,Xn) has the Marshall-Olkin distribution on (Sⁿ,∗).
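The general construction can be sketched for n=3 in the standard semigroup: one exponential variable per nonzero bit string, with Xi the minimum over the strings whose i-th bit is 1. The rates are illustrative.

```python
import itertools
import random

# Sketch of the general Marshall-Olkin construction for n = 3 in the
# standard semigroup ([0, inf), +): one memoryless (exponential) variable
# Z_b per nonzero bit string b, and X_i = inf of the Z_b with b_i = 1.
n = 3
rng = random.Random(7)
bit_strings = [b for b in itertools.product((0, 1), repeat=n) if any(b)]
rates = {b: 0.2 + 0.1 * i for i, b in enumerate(bit_strings)}  # illustrative

Z = {b: rng.expovariate(rates[b]) for b in bit_strings}
X = [min(Z[b] for b in bit_strings if b[i] == 1) for i in range(n)]

assert len(bit_strings) == 2 ** n - 1   # 7 variables are required
assert all(x > 0 for x in X)
```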

So a collection of 2ⁿ−1 right independent, memoryless variables on (S,∗) is required for the construction of the Marshall-Olkin variable on (Sⁿ,∗). The marginal distributions are of the same type. For the following results, let Fb denote the reliability function of Zb for b∈Bn.

Suppose again that n∈ℕ₊ and that X=(X1,X2,…,Xn) has a Marshall-Olkin distribution on (Sⁿ,∗) as in definition [18]. For k∈ℕ₊ with k≤n, let (j1,j2,…,jk) be a subsequence of (1,2,…,n). Then

  1. (Xj1,Xj2,…,Xjk) has a Marshall-Olkin distribution on (Sᵏ,∗).
  2. inf{Xj1,Xj2,…,Xjk} is memoryless on (S,∗) with reliability function ∏{Fb:b∈Bn, bj1=bj2=⋯=bjk=1}

Suppose again that n∈ℕ₊ and that X=(X1,X2,…,Xn) has a Marshall-Olkin distribution on (Sⁿ,∗) as in definition [18]. Let H denote the reliability function of X on (Sⁿ,⪯ₙ). Then H(x1,x2,…,xn)=∏_{b∈Bn}Fb(sup{xi:bi=1}) for (x1,x2,…,xn)∈Sⁿ.

The generalization of the partial memoryless property is straightforward.

Suppose again that n∈ℕ₊ and that X=(X1,X2,…,Xn) has a Marshall-Olkin distribution on (Sⁿ,∗) as in definition [18]. Then X has the partial memoryless property H[(t,t,…,t)∗(x1,x2,…,xn)]=H(t,t,…,t)H(x1,x2,…,xn) for t∈S and (x1,x2,…,xn)∈Sⁿ.

We will revisit Marshall-Olkin distributions for the standard continuous semigroup ([0,∞),+) in Section 3.4.